DataONE Tasks: Issues
https://redmine.dataone.org/ | 2019-03-14T17:11:26Z
Infrastructure - Story #8779 (New): ForesiteResourceMap performance issue
https://redmine.dataone.org/issues/8779 | 2019-03-14T17:11:26Z | Rob Nahf (rnahf@epscor.unm.edu)
<p>Profiling reveals that much time is spent in IndexVisibilityDelegate, and it seemingly is called twice unnecessarily, first in _init, second in getAllResourceIDs().</p>
<p>This class in general is not well documented and has some confusing traversal code, so it is difficult to assess what exactly is going on. It also seems to be a misleading encapsulation of data, in that it attempts to filter out resource map members based on current system metadata properties (archived or not), but that's not mentioned at all in the sparse javadocs.</p>
<p>The code needs to be reviewed to make sure no unnecessary calls are made.<br>
If resource map checking (for completeness) is no longer going to be done, this class should probably be deprecated or removed.</p>
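<p>If profiling confirms the duplicate lookups, a small cache in front of the visibility check would keep the delegate from being consulted more than once per identifier. A minimal sketch with hypothetical names (not the actual d1_cn_index_processor classes):</p>

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Function;

/**
 * Hypothetical sketch: memoize visibility lookups so a delegate like
 * IndexVisibilityDelegate is consulted at most once per pid, even when
 * both the constructor and getAllResourceIDs() ask about the same pid.
 */
public class CachedVisibility {
    private final Map<String, Boolean> cache = new HashMap<>();
    private final Function<String, Boolean> delegate;
    private int delegateCalls = 0; // instrumentation, useful for profiling

    public CachedVisibility(Function<String, Boolean> delegate) {
        this.delegate = delegate;
    }

    public boolean isVisible(String pid) {
        // only the first lookup for a pid reaches the delegate
        return cache.computeIfAbsent(pid, p -> {
            delegateCalls++;
            return delegate.apply(p);
        });
    }

    public int getDelegateCalls() { return delegateCalls; }
}
```

<p>The cache also makes the filtering behavior explicit in one place, which would address the documentation complaint above.</p>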
Infrastructure - Story #8702 (New): Indexing Refactor Strategy
https://redmine.dataone.org/issues/8702 | 2018-09-21T22:42:48Z | Rob Nahf (rnahf@epscor.unm.edu)
<p>Indexing performs poorly and has some consistency problems.</p>
<p>A solution was developed that addresses the main issues, and involves the creation of a separate solr core for relationships (the resource maps). Initially, the solution will create the separate core as a behind the scenes reference for the main search index. Relationships (resource_map, documents, isDocumentedBy) will still be copied into the main search record.</p>
<p>Additionally, archived objects will no longer be removed from the index; instead, an <code>archived</code> field will be added to the schema.</p>
<p>The new logic for processing resource maps and archiving objects should remove many of the inefficient checks that cause records to be reindexed.</p>
<p>The main phases for development will be:</p>
<ol>
<li>refactor out the custom solr client in favor of the standard SolrJ client (org.apache.solr:solr-solrj).<br></li>
<li>migrate the schema to include the archived field & introduce the relationships core; refactor the resourcemap subprocessor to use it, and trigger relationship tasks.</li>
<li>refactor the delete subprocessor (for archived records) & add the search handler.</li>
</ol>
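<p>The intended behavior of phases 2 and 3 can be sketched in plain Java. The field names (<code>archived</code>, <code>resource_map</code>, <code>documents</code>, <code>isDocumentedBy</code>) follow the description above; the method names are illustrative only, not the actual subprocessor or handler API:</p>

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

/**
 * Sketch (assumed names) of the two schema changes described above:
 * relationship fields copied from the separate relationships core into the
 * main search record, and an archived flag filtered out by default.
 */
public class IndexRefactorSketch {

    /** fq clause excluding archived docs, or null when archived content is requested. */
    public static String archivedFilter(boolean includeArchived) {
        return includeArchived ? null : "-archived:true";
    }

    /** Copy relationship fields from a relationships-core record into the search record. */
    public static Map<String, Object> mergeRelationships(
            Map<String, Object> searchDoc, Map<String, List<String>> relDoc) {
        Map<String, Object> merged = new HashMap<>(searchDoc); // do not mutate the input
        for (String field : new String[] {"resource_map", "documents", "isDocumentedBy"}) {
            if (relDoc.containsKey(field)) {
                merged.put(field, relDoc.get(field));
            }
        }
        return merged;
    }
}
```

<p>Keeping the merge as a pure function would make the "copy into the main search record" step easy to test independently of Solr.</p>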
Infrastructure - Story #8525 (In Progress): timeout exceptions thrown from Hazelcast disable sync...
https://redmine.dataone.org/issues/8525 | 2018-03-27T22:36:54Z | Rob Nahf (rnahf@epscor.unm.edu)
<p>Very occasionally, synchronization disables itself when RuntimeExceptions bubble up. The most common of these is when the Hazelcast client seemingly disconnects, or can't complete an operation, and a java.util.concurrent.TimeoutException is thrown.</p>
<p>These are usually due to network problems, as evidenced by timeout exceptions appearing in both the Metacat hazelcast-storage.log files as well as d1-processing logs.</p>
<p>Temporary problems like this should be recoverable, so a retry or bypass for those timeouts should be implemented. It's not clear whether a new HazelcastClient should be instantiated or whether the same client is still usable. (Is the client tightly bound to a session, or does it recover?) If a new client is needed, preliminary searching through the code indicates that HazelcastClientFactory.getProcessingClient() is only used in a few places, and the singleton behavior it relies on can be sidestepped by removing the method and replacing it with a getLock() wrapper method (that seems to be the dominant use case for it). See the newer SyncQueueFacade in d1_synchronization for guidance. If the client is never exposed, it can be refreshed as needed.</p>
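<p>A bounded retry along those lines could be sketched as follows. The refresh hook stands in for whatever client re-instantiation turns out to be necessary; none of this reflects actual d1_synchronization code:</p>

```java
import java.util.concurrent.Callable;
import java.util.concurrent.TimeoutException;

/**
 * Sketch: retry an operation a bounded number of times when a
 * TimeoutException bubbles up, refreshing the client between attempts.
 * Whether a new HazelcastClient is actually required is the open question
 * in this story, so the refresh step is left as an abstract hook.
 */
public class TimeoutRetry {

    public interface ClientRefresher { void refresh(); }

    public static <T> T withRetry(Callable<T> op, ClientRefresher refresher,
                                  int maxAttempts) throws Exception {
        Exception last = null;
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                return op.call();
            } catch (Exception e) {
                // Hazelcast wraps TimeoutException in a RuntimeException
                // (see the stack trace below), so inspect the cause chain.
                if (!hasTimeoutCause(e)) throw e;
                last = e;
                refresher.refresh(); // e.g. re-create the Hazelcast client
            }
        }
        throw last;
    }

    private static boolean hasTimeoutCause(Throwable t) {
        for (Throwable c = t; c != null; c = c.getCause()) {
            if (c instanceof TimeoutException) return true;
        }
        return false;
    }
}
```

<p>Re-creating the client inside the refresh hook is the conservative choice if the existing client turns out to be session-bound; if it recovers on its own, the hook can be a no-op.</p>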
<pre>root@cn-unm-1:/var/metacat/logs# grep FATAL hazelcast-storage.log.1
[FATAL] 2018-03-27 03:15:19,380 (BaseManager$2:run:1402) [64.106.40.6]:5701 [DataONE] Caught error while calling event listener; cause: [CONCURRENT_MAP_CONTAINS_KEY] Operation Timeout (with no response!): 0
</pre><pre>[ERROR] 2018-03-27 03:15:19,781 [ProcessDaemonTask1] (SyncObjectTaskManager:run:84) java.util.concurrent.ExecutionException: java.lang.RuntimeException: java.util.concurrent
.TimeoutException: [CONCURRENT_MAP_REMOVE] Operation Timeout (with no response!): 0
java.util.concurrent.ExecutionException: java.lang.RuntimeException: java.util.concurrent.TimeoutException: [CONCURRENT_MAP_REMOVE] Operation Timeout (with no response!): 0
at java.util.concurrent.FutureTask.report(FutureTask.java:122)
at java.util.concurrent.FutureTask.get(FutureTask.java:192)
at org.dataone.cn.batch.synchronization.SyncObjectTaskManager.run(SyncObjectTaskManager.java:76)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.RuntimeException: java.util.concurrent.TimeoutException: [CONCURRENT_MAP_REMOVE] Operation Timeout (with no response!): 0
at com.hazelcast.impl.ClientServiceException.readData(ClientServiceException.java:63)
at com.hazelcast.nio.Serializer$DataSerializer.read(Serializer.java:104)
at com.hazelcast.nio.Serializer$DataSerializer.read(Serializer.java:79)
at com.hazelcast.nio.AbstractSerializer.toObject(AbstractSerializer.java:121)
at com.hazelcast.nio.AbstractSerializer.toObject(AbstractSerializer.java:156)
at com.hazelcast.client.ClientThreadContext.toObject(ClientThreadContext.java:72)
at com.hazelcast.client.IOUtil.toObject(IOUtil.java:34)
at com.hazelcast.client.ProxyHelper.getValue(ProxyHelper.java:186)
at com.hazelcast.client.ProxyHelper.doOp(ProxyHelper.java:146)
at com.hazelcast.client.ProxyHelper.doOp(ProxyHelper.java:140)
at com.hazelcast.client.QueueClientProxy.innerPoll(QueueClientProxy.java:115)
at com.hazelcast.client.QueueClientProxy.poll(QueueClientProxy.java:111)
at org.dataone.cn.batch.synchronization.type.SyncQueueFacade.poll(SyncQueueFacade.java:231)
at org.dataone.cn.batch.synchronization.tasks.SyncObjectTask.call(SyncObjectTask.java:131)
at org.dataone.cn.batch.synchronization.tasks.SyncObjectTask.call(SyncObjectTask.java:73)
</pre>
Infrastructure - Story #8504 (New): Support creation of data citation record from solr record
https://redmine.dataone.org/issues/8504 | 2018-03-19T21:53:13Z | Dave Vieglais (dave.vieglais@gmail.com)
<p>The goal of this story is to ensure that elements in the solr search schema are available and appropriately populated to support generation of DataCite version 4.x or later records.</p>
<p>By ensuring support for this schema, it can also be asserted that suitable citation metadata can be provided in landing pages and other renderings of content provided by DataONE.</p>
<p>Resources:</p>
<ul>
<li><a href="https://schema.datacite.org/meta/kernel-4.1/" class="external">DataCite Schema version 4</a></li>
<li><a href="http://indexer-documentation.readthedocs.io/en/latest/generated/solr_schema.html" class="external">DataONE solr Search fields</a></li>
<li><a href="https://rd-alliance.org/group/data-citation-wg/outcomes/data-citation-recommendation.html" class="external">RDA Data Citation Recommendations</a></li>
</ul>
Infrastructure - Story #8363 (New): indexer shutdown generates index tasks
https://redmine.dataone.org/issues/8363 | 2018-02-12T21:42:22Z | Rob Nahf (rnahf@epscor.unm.edu)
<p>Seen in STAGE: somehow the index processor generated about 15k tasks (after processing 215k tasks over the weekend) during a service stop. It also created about 12.5k failures. Before trying to stop services, this was the status of postgres:</p>
<pre>d1-index-queue=# select status, count(*) from index_task group by status;
status | count
------------+-------
NEW | 5
FAILED | 1659
IN PROCESS | 367
(3 rows)
</pre>
<p>Execution of <code>/etc/init.d/d1-index-task-processor stop</code> timed out.<br>
I performed <code>/etc/init.d/d1-index-task-generator stop</code> successfully, getting an <code>[OK]</code>;<br>
then I performed <code>/etc/init.d/d1-processing stop</code> on UCSB, also getting an <code>[OK]</code>.</p>
<p>Examination of the indexing log file a couple of minutes later showed this:</p>
<pre>[ INFO] 2018-02-12 20:36:08,975 (IndexTaskProcessor:logProcessorLoad:245) new tasks:0, tasks previously failed: 1661
[ INFO] 2018-02-12 20:36:09,361 (IndexTaskProcessor:processFailedIndexTaskQueue:226) IndexTaskProcessor.processFailedIndexTaskQueue with size 0
[ WARN] 2018-02-12 20:36:09,361 (IndexTaskProcessorJob:execute:58) processing job [org.dataone.cn.index.processor.IndexTaskProcessorJob@515de84e] finished execution of index task processor [org.dataone.cn.index.processor.IndexTaskProcessor@2062
1d44]
[ WARN] 2018-02-12 20:36:26,571 (IndexTaskProcessorScheduler:stop:99) stopping index task processor quartz scheduler [org.dataone.cn.index.processor.IndexTaskProcessorScheduler@103bbd22] ...
[ INFO] 2018-02-12 20:36:26,572 (QuartzScheduler:standby:572) Scheduler QuartzScheduler_$_NON_CLUSTERED paused.
[ INFO] 2018-02-12 20:36:26,572 (IndexTaskProcessorScheduler:stop:111) Scheuler.interrupt method can't succeed to interrupt the d1 index job and the static method IndexTaskProcessorJob.interruptCurrent() will be called.
[ WARN] 2018-02-12 20:36:26,572 (IndexTaskProcessorJob:interruptCurrent:92) IndexTaskProcessorJob class [1806183035] interruptCurrent called, shutting down processor [org.dataone.cn.index.processor.IndexTaskProcessor@20621d44]
[ WARN] 2018-02-12 20:36:26,573 (IndexTaskProcessor:shutdownExecutor:952) processor [org.dataone.cn.index.processor.IndexTaskProcessor@20621d44] Shutting down the ExecutorService. Will allow active tasks to finish; will cancel submitted tasks
and return them to NEW status, wait for active tasks to finish, then return any remaining task not yet submitted to NEW status....
[ WARN] 2018-02-12 20:36:26,573 (IndexTaskProcessor:shutdownExecutor:955) ...1.) closing ExecutorService to new tasks...
[ WARN] 2018-02-12 20:36:26,574 (IndexTaskProcessor:shutdownExecutor:957) ...2.) cancelling cancellable futures...
[ WARN] 2018-02-12 20:36:26,575 (IndexTaskProcessor:shutdownExecutor:958) ...number of futures: 591344
[ WARN] 2018-02-12 20:36:26,575 (IndexTaskProcessor:shutdownExecutor:959) ... number of tasks in futures map: 591344
</pre>
<p>15 minutes or so later, the log showed this:</p>
<pre>[ WARN] 2018-02-12 20:36:26,573 (IndexTaskProcessor:shutdownExecutor:955) ...1.) closing ExecutorService to new tasks...
[ WARN] 2018-02-12 20:36:26,574 (IndexTaskProcessor:shutdownExecutor:957) ...2.) cancelling cancellable futures...
[ WARN] 2018-02-12 20:36:26,575 (IndexTaskProcessor:shutdownExecutor:958) ...number of futures: 591344
[ WARN] 2018-02-12 20:36:26,575 (IndexTaskProcessor:shutdownExecutor:959) ... number of tasks in futures map: 591344
[ WARN] 2018-02-12 20:52:30,811 (IndexTaskProcessor:shutdownExecutor:988) ...number of (cancellable) runnables/tasks reset to new: 0
[ WARN] 2018-02-12 20:52:30,811 (IndexTaskProcessor:shutdownExecutor:989) ...number of (cancellable) runnables not mapped to tasks: 0
[ WARN] 2018-02-12 20:52:30,811 (IndexTaskProcessor:shutdownExecutor:990) ...number of uncancellable runnables: 591344 (completed or in process)
[ WARN] 2018-02-12 20:52:30,812 (IndexTaskProcessor:shutdownExecutor:993) ...3.) waiting (with timeout) for active futures to finish...
[ WARN] 2018-02-12 20:52:30,812 (IndexTaskProcessor:shutdownExecutor:998) ...4.) Reviewing remaining uncancellables to check for completion, returning incomplete ones to NEW status...
[ WARN] 2018-02-12 20:52:30,835 (IndexTaskProcessor:shutdownExecutor:1026) ...5.) Calling shutdownNow on the executor service.
[ WARN] 2018-02-12 20:52:30,835 (IndexTaskProcessor:shutdownExecutor:1028) ... .... number of runnables still waiting: 0
[ WARN] 2018-02-12 20:52:30,835 (IndexTaskProcessor:shutdownExecutor:1030) ...6.) returning preSubmitted tasks to NEW status...
[ WARN] 2018-02-12 20:52:30,835 (IndexTaskProcessor:shutdownExecutor:1031) ... .... number of preSubmitted tasks: 34735
[ INFO] 2018-02-12 20:52:30,835 (IndexTask:markNew:454) Even tough it was masked new, it is still considered failed for id testGetPackage_2017119234441164 since it was tried to many times.
[ERROR] 2018-02-12 20:52:30,891 (IndexTaskProcessor:shutdownExecutor:1038) ....... Exception thrown trying to return task to NEW status for pid: testGetPackage_2017119234441164
org.springframework.orm.hibernate3.HibernateOptimisticLockingFailureException: Object of class [org.dataone.cn.index.task.IndexTask] with identifier [13071797]: optimistic locking failed; nested exception is org.hibernate.StaleObjectStateException: Row was updated or deleted by another transaction (or unsaved-value mapping was incorrect): [org.dataone.cn.index.task.IndexTask#13071797]
...
[ INFO] 2018-02-12 20:54:19,618 (IndexTask:markNew:454) Even tough it was masked new, it is still considered failed for id P3_201622214921901 since it was tried to many times.
[ WARN] 2018-02-12 20:54:19,621 (IndexTaskProcessor:shutdownExecutor:1036) ... preSubmittedTask for pid P3_201622214921901returned to NEW status.
[ WARN] 2018-02-12 20:54:19,623 (IndexTaskProcessor:shutdownExecutor:1036) ... preSubmittedTask for pid resource_map_doi:10.5065/D6VD6WFPreturned to NEW status.
[ INFO] 2018-02-12 20:54:19,623 (IndexTask:markNew:454) Even tough it was masked new, it is still considered failed for id testGetPackage_NotAuthorized_201710605522454 since it was tried to many times.
[ WARN] 2018-02-12 20:54:19,626 (IndexTaskProcessor:shutdownExecutor:1036) ... preSubmittedTask for pid testGetPackage_NotAuthorized_201710605522454returned to NEW status.
[ WARN] 2018-02-12 20:54:19,628 (IndexTaskProcessor:shutdownExecutor:1036) ... preSubmittedTask for pid resource_map_urn:uuid:d3606ccb-2d50-4723-ae45-c0d01b817e48returned to NEW status.
[ WARN] 2018-02-12 20:54:19,631 (IndexTaskProcessor:shutdownExecutor:1036) ... preSubmittedTask for pid resource_map_doi:10.18739/A2165Freturned to NEW status.
[ WARN] 2018-02-12 20:54:19,631 (IndexTaskProcessor:shutdownExecutor:1041) ............7.) DONE with shutting down IndexTaskProcessor.
[ INFO] 2018-02-12 20:54:19,631 (IndexTaskProcessorScheduler:stop:113) The scheuler.interrupt method seems not interrupt the d1 index job and the static method IndexTaskProcessorJob.interruptCurrent() was called.
[ WARN] 2018-02-12 20:54:19,632 (IndexTaskProcessorScheduler:stop:128) Job scheduler [org.dataone.cn.index.processor.IndexTaskProcessorScheduler@103bbd22] finished executing all jobs. The d1-index-processor shut down sucessfully.============================================
</pre>
<p>but postgres yielded this:</p>
<pre>d1-index-queue=# select status, count(*) from index_task group by status;
status | count
--------+-------
NEW | 15367
FAILED | 14032
(2 rows)
</pre>
<p>Indexer shutdowns are a stubborn problem...</p>
Infrastructure - Story #8307 (New): Check node subject on node registration and subsequent calls
https://redmine.dataone.org/issues/8307 | 2018-02-06T20:04:39Z | Dave Vieglais (dave.vieglais@gmail.com)
<p>The <code>/node/subject</code> entry of the node document should match the subject of the certificate used to register the node (unless the call is being made by a CN certificate).</p>
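<p>A sketch of that check, assuming subjects are compared as canonicalized strings (the real comparison would go through DataONE's subject handling, and the subject values here are made up):</p>

```java
import java.util.Set;

/**
 * Sketch of the registration check described above: the certificate
 * subject must match the node document's /node/subject entry, unless the
 * caller presents a CN certificate.
 */
public class NodeSubjectCheck {
    public static boolean isAllowed(String certSubject, String nodeSubject,
                                    Set<String> cnSubjects) {
        if (cnSubjects.contains(certSubject)) return true; // CN override
        return certSubject.equals(nodeSubject);
    }
}
```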
Infrastructure - Story #8234 (New): Use University of Kansas ORCID membership to support authenti...
https://redmine.dataone.org/issues/8234 | 2018-01-09T02:00:28Z | Dave Vieglais (dave.vieglais@gmail.com)
<p><a href="https://orcid.org/members/001G000001CAkZgIAL-university-of-kansas" class="external">KU is a premium ORCID member</a> as a member of the Greater Western Library Alliance (GWLA). As a result, KU has access to five ORCID API keys. One is currently in use for the KU DSpace instance.</p>
<p>The goal of this story is to leverage one of the remaining API keys to support ORCID authentication in the DataONE production environment.</p>
Infrastructure - Story #8227 (In Progress): ExceptionHandler regurgitates long html pages into th...
https://redmine.dataone.org/issues/8227 | 2017-12-13T21:19:23Z | Rob Nahf (rnahf@epscor.unm.edu)
<p>While it is useful to know what was returned in an error response when it was not the correct response, HTML pages can be verbose and include excessive markup that is not useful. Especially when a GMN MN is in debugging mode and a systematic error is being returned (such as during an authentication issue), these logged HTML pages can end up being 75% of the log files and cause meaningful log lines to scroll off the end of the log rotation.</p>
<p>An option should be provided to limit the number of characters returned in the ServiceFailure.</p>
<p>Options are to:<br>
1. eliminate the message body altogether<br>
2. truncate the message body<br>
3. only print the visible parts of the HTML (strip the markup elements)<br>
4. a combination of 2 & 3</p>
<p>Since this is a new feature, develop in trunk.</p>
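<p>Options 2 and 3 combined might look like the following sketch. The regex-based stripping is a simplification aimed at log hygiene, not a full HTML parser, and the class name is an assumption, not the actual ExceptionHandler API:</p>

```java
/**
 * Sketch: strip markup from an HTML error body and truncate the result
 * before it is stored in the ServiceFailure description.
 */
public class ErrorBodyCondenser {
    public static String condense(String body, int maxChars) {
        if (body == null) return null;
        String visible = body
                .replaceAll("(?is)<(script|style)[^>]*>.*?</\\1>", " ") // drop non-visible content
                .replaceAll("<[^>]+>", " ")   // drop remaining tags
                .replaceAll("\\s+", " ")      // collapse whitespace
                .trim();
        return visible.length() <= maxChars ? visible
                : visible.substring(0, maxChars) + "...";
    }
}
```

<p>With a cap of a few hundred characters, a repeated authentication failure would log one short line instead of a full HTML page.</p>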
Infrastructure - Story #8173 (New): add checks for retrograde systemMetadata changes
https://redmine.dataone.org/issues/8173 | 2017-09-01T19:42:33Z | Rob Nahf (rnahf@epscor.unm.edu)
<p>With the ability to prioritize, and with the introduction of parallelized index task processing, the effective queue is no longer guaranteed to be time-ordered. If two valid system metadata changes result in two tasks and the second change hits the index first, the earlier task should be rejected, as its changes are out of date.</p>
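<p>The guard could be as simple as comparing dateSysMetadataModified values before applying a task; a sketch, with assumed method and parameter names:</p>

```java
import java.util.Date;

/**
 * Sketch of the proposed check: before applying an index task, compare
 * the task's system-metadata modification date with what the index
 * already holds, and reject retrograde updates.
 */
public class RetrogradeCheck {
    /**
     * @param taskModified  dateSysMetadataModified carried by the index task
     * @param indexModified dateSysMetadataModified already indexed, or null
     * @return true if the task should be applied
     */
    public static boolean shouldApply(Date taskModified, Date indexModified) {
        if (indexModified == null) return true;     // nothing indexed yet
        return !taskModified.before(indexModified); // reject out-of-date tasks
    }
}
```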
Infrastructure - Task #8098 (Closed): Token-based authentication fails with LE CN certs
https://redmine.dataone.org/issues/8098 | 2017-05-21T17:06:46Z | Chris Jones (cjones@nceas.ucsb.edu)
<p>When trying to call @MN.create()@ on my local Metacat setup (which points to the production CN environment for ORCID authentication), I'm getting an @InvalidToken@ error:<br>
<br>
<?xml version="1.0" encoding="UTF-8"?><br>
Session is required to WRITE to the Node.<br>
</p>
<p>This is odd because I recently logged in via ORCID, so it looked like a token verification issue. In the Metacat log, I see:</p>
<p>metacat 20170521-10:35:07: [WARN]: Could not use public key to verify provided token: eyJhbGciOiJSUzI1NiJ9.eyJzdWIiOiJodHRwOlwvXC9vcmNpZC5vcmdcLzAwMDAtMDAwMi04MTIxLTIzNDEiLCJmdWxsTmFtZSI6IkNocmlzdG9waGVyIEpvbmVzIiwiaXNzdWVkQXQiOiIyMDE3LTA1LTIxVDE1OjU3OjA2LjEyOCswMDowMCIsImNvbnN1bWVyS2V5IjoidGhlY29uc3VtZXJrZXkiLCJleHAiOjE0OTU0NDcwMjYsInVzZXJJZCI6Imh0dHA6XC9cL29yY2lkLm9yZ1wvMDAwMC0wMDAyLTgxMjEtMjM0MSIsInR0bCI6NjQ4MDAsImlhdCI6MTQ5NTM4MjIyNn0.dynDbRKqIuI1bXzPYlHfW7aFcrl2J7O8ZWqxS_2DHBotx4AqX_hbxuRrlQ_9s-V1mRJupyxkYxW3EWkLcoMUQNTuyMLGpV53GPoGdBjkTEd407GU-yxv_G3cmmSovXSLj6AAjeKJ8KHBt4y6JtgqR2isf5YGoM18CwM-IZV3nJVPBMZpNMPhYSWJeaeD2u02duKCpcy7L-XD_OCLJdzHjtjyFqqbHvqGyZIPqc9Kp_JTuTmlYaAZe9JiLcjHnyaOeHMGCEkmOekiRA_wh6DtnBLKyCczBjNg0kirxMk27abjAxt-ckhKfrCT6dnXbd1lCLNnxVYiJj5wztNOGH492T3nyaSQGROnSQd6cxB3pPAiwW7AOR34MPNJlNv_r-3WbwThDeOOtrMSvfZtYGv6Mn_i0-d1yjccRDzZeXdRS0P91GYfdK2lfog1lhiPuec3gD4V4plNJR3wKSSMhgjikH6igCB5I7C5n9Ye5vSeyWW9ApwLogfbEUc3xKgiCgj1jtED4L7E3WgUvtWxsyqMMtaEAJGvRHlGPPShD3xHPsm6ltCVrU1arLXneuGa0R7M-GgzMk0z5HdRE2bD2agu5WuN-w5-w9W6jwrzgI4wM7v8KiJYxeM332nx4f2BF6ArFJ2K-DxlpgmdK6bkPTtL7H-uj5digXvBoHFYZAJF49c</p>
<p>After grabbing the public certificate from the production CN:</p>
<p>-----BEGIN CERTIFICATE-----<br>
MIIFQzCCBCugAwIBAgISAxPSoq7BM7aFc1VzgyTJkz3wMA0GCSqGSIb3DQEBCwUA<br>
MEoxCzAJBgNVBAYTAlVTMRYwFAYDVQQKEw1MZXQncyBFbmNyeXB0MSMwIQYDVQQD<br>
ExpMZXQncyBFbmNyeXB0IEF1dGhvcml0eSBYMzAeFw0xNzA1MTcxMjI5MDBaFw0x<br>
NzA4MTUxMjI5MDBaMBkxFzAVBgNVBAMTDmNuLmRhdGFvbmUub3JnMIIBIjANBgkq<br>
hkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEAtp++UWPu0Zm4gIs01F+LE94i4eExI+UX<br>
82DIB3Xn93FW4IgDTsjEfXCB3AHggdx6GnExbDzu/iXn+K3LiW6QaeasG47XOeup<br>
JjpmJqDROAJvLy1GpgrFeNxEe5F6xljPcAxUH/W/NkoHAem7wMatRNA53f6JkMVd<br>
sKXAYPOdKUOqhQ9QRMqEFIPImt+SHfvxUkQyL4g+1taQ5XYDu5zwF5+k77ZRre+o<br>
RVR9gHdbdlvLLQYP9eGJdi+nmFFTrEuXIklB8SQi6yvck0p6nR2sjmxFlnaLTe7Z<br>
iaVWaA1vvwvwgG27Q2iMcnAG+JXQDe7Jd1YIuXUW7vVYyGl4ONbp3QIDAQABo4IC<br>
UjCCAk4wDgYDVR0PAQH/BAQDAgWgMB0GA1UdJQQWMBQGCCsGAQUFBwMBBggrBgEF<br>
BQcDAjAMBgNVHRMBAf8EAjAAMB0GA1UdDgQWBBSyQkmQUHHO3EkItWseuA3L6vg8<br>
1DAfBgNVHSMEGDAWgBSoSmpjBH3duubRObemRWXv86jsoTBwBggrBgEFBQcBAQRk<br>
MGIwLwYIKwYBBQUHMAGGI2h0dHA6Ly9vY3NwLmludC14My5sZXRzZW5jcnlwdC5v<br>
cmcvMC8GCCsGAQUFBzAChiNodHRwOi8vY2VydC5pbnQteDMubGV0c2VuY3J5cHQu<br>
b3JnLzBcBgNVHREEVTBTghRjbi1vcmMtMS5kYXRhb25lLm9yZ4IVY24tdWNzYi0x<br>
LmRhdGFvbmUub3JnghRjbi11bm0tMS5kYXRhb25lLm9yZ4IOY24uZGF0YW9uZS5v<br>
cmcwgf4GA1UdIASB9jCB8zAIBgZngQwBAgEwgeYGCysGAQQBgt8TAQEBMIHWMCYG<br>
CCsGAQUFBwIBFhpodHRwOi8vY3BzLmxldHNlbmNyeXB0Lm9yZzCBqwYIKwYBBQUH<br>
AgIwgZ4MgZtUaGlzIENlcnRpZmljYXRlIG1heSBvbmx5IGJlIHJlbGllZCB1cG9u<br>
IGJ5IFJlbHlpbmcgUGFydGllcyBhbmQgb25seSBpbiBhY2NvcmRhbmNlIHdpdGgg<br>
dGhlIENlcnRpZmljYXRlIFBvbGljeSBmb3VuZCBhdCBodHRwczovL2xldHNlbmNy<br>
eXB0Lm9yZy9yZXBvc2l0b3J5LzANBgkqhkiG9w0BAQsFAAOCAQEAJo/aaCo0NweP<br>
prHz+9Ko39xZ/Y6kum0ZOSw6BFM8zgkOOd1R0rbc53j09yKDi3V+MKd5rXfISNsp<br>
LKBVe/R8HH/rglYUhMTBBizGsEdyPE4n5I3ml4RyOVmC1SpDPUzH0CAeSLkzBpBV<br>
WVIfEwl641GtT0hBcwVjMlDYywrvSHv4mifVLd/2ZTSYillrhQzQySKb9g7jbEld<br>
LHY1WoIU0E5XgQJq3b6Vhb5dXVkHsDfwPHNpJA5fVCVYoKazo+xSNBP757ta/ix4<br>
e9CbRsQQ0TgEsuUAOa9lh9+O8uAL5zkZ4kwZCLypxbkZ8/YYOCMGMtGz4632J7VF<br>
Ozukfk41bw==<br>
-----END CERTIFICATE-----</p>
<p>and trying to verify the token with this certificate, it fails. </p>
<p>However, it verifies correctly with the old CN certificate:<br>
<br>
-----BEGIN CERTIFICATE-----<br>
MIIFrTCCBJWgAwIBAgICbkowDQYJKoZIhvcNAQELBQAwRzELMAkGA1UEBhMCVVMx<br>
FjAUBgNVBAoTDUdlb1RydXN0IEluYy4xIDAeBgNVBAMTF1JhcGlkU1NMIFNIQTI1<br>
NiBDQSAtIEczMB4XDTE0MTEwMzEyNTMyNFoXDTE3MDUyMDIxNDU0OVowgZExEzAR<br>
BgNVBAsTCkdUMzkwMjU2MTcxMTAvBgNVBAsTKFNlZSB3d3cucmFwaWRzc2wuY29t<br>
L3Jlc291cmNlcy9jcHMgKGMpMTIxLzAtBgNVBAsTJkRvbWFpbiBDb250cm9sIFZh<br>
bGlkYXRlZCAtIFJhcGlkU1NMKFIpMRYwFAYDVQQDDA0qLmRhdGFvbmUub3JnMIIC<br>
IjANBgkqhkiG9w0BAQEFAAOCAg8AMIICCgKCAgEAzZaa/tslwA/CJ6Wqfzl72TrF<br>
/8IurHHrfzmme/B2dSUt0+zDfdfXWe7p6pZ4yJp95Kk34cf0EFWgFJ5Nc1gyXJUh<br>
Ht6IVweDDFrExeNPsNbI5DLFdUJ5ZfNhWrqu2C4kdeRfHqxOvI0w6XEfdZ4yI3QC<br>
zfx5EtsoFEXpqK5Xe3r5KEnXVsPq6azerVqvq2UqhPa0EYJA8/CVJiQ0CRQl+w9x<br>
Mh6GBvHUXqCHBPlRPIY7QomI+3Cx8gYgcLCCEcHVgzU05zQQRwdtIqjENq6CubH9<br>
UTMiKS81CFJbAVrKetDRI3bNGIcEEpjV1XC28OOWXNc9fXXAK3fvVFVl2tuzYFn0<br>
ROmRrtiz4+jXC7mp7/fTb5ekTeenKyoVA5UicbIHM1PPQeTwcHUH7CxybJVheGAo<br>
7wwzqrxin3LMMyn56QBXqB81qL+iMJ+ZBHXxiS5V6g4W1ag3VOtDvyRtN1QGB6J2<br>
enOTBOHNwr9bHuJcVPx1dYd6YjZD3LQbyJZyVtYHalnlCXGjLCxs9B2uL4MBllb5<br>
N++ouBiujO5ww6Ht+MgOq/gbahx9WlJCs5xXLy8Hf+FfjUBZXDdkvLwa36FWktZa<br>
ibbqqeBBq9IaW0gUNNmhYs3SB8J7JICVflUIp7e7wy7cXBJHpkATZKAuHVnqJ8ZT<br>
83YekoQFyxpcqB2fmRkCAwEAAaOCAVYwggFSMB8GA1UdIwQYMBaAFMOc8/zTRgg0<br>
u85Gf6B8W/PiCMtZMFcGCCsGAQUFBwEBBEswSTAfBggrBgEFBQcwAYYTaHR0cDov<br>
L2d2LnN5bWNkLmNvbTAmBggrBgEFBQcwAoYaaHR0cDovL2d2LnN5bWNiLmNvbS9n<br>
di5jcnQwDgYDVR0PAQH/BAQDAgWgMB0GA1UdJQQWMBQGCCsGAQUFBwMBBggrBgEF<br>
BQcDAjAlBgNVHREEHjAcgg0qLmRhdGFvbmUub3JnggtkYXRhb25lLm9yZzArBgNV<br>
HR8EJDAiMCCgHqAchhpodHRwOi8vZ3Yuc3ltY2IuY29tL2d2LmNybDAMBgNVHRMB<br>
Af8EAjAAMEUGA1UdIAQ+MDwwOgYKYIZIAYb4RQEHNjAsMCoGCCsGAQUFBwIBFh5o<br>
dHRwczovL3d3dy5yYXBpZHNzbC5jb20vbGVnYWwwDQYJKoZIhvcNAQELBQADggEB<br>
ABcvSyNwX1jHZ7HRX5Lzcua0Q4//wc5KCBvPgPrbr3bGSi3+t+Rc4ZagIUxFWSd1<br>
uZ+guQ4lywhQXGOXh7dH1SPljPOwZ9VPdhJMPW/woaQ0ndakLvW0OBIgyyqIcJ57<br>
8e6DKzZ0jd97xmXYAa7iMhCxL2lpXzDQMH5k8XhENHcjMXfVitkqmIS2Wfi1rEMK<br>
phszml9yRABtx+X0z/4/xmNZ2PrNApqmqVD2DnY1MgJNHga/KmPX/6VZ+NEszudP<br>
rvrD5hQvAjkJA+5kgqX31w98ggfXg4oxQo8AhKrHWnhI52SoWT1BOwSGDRpgRW/n<br>
1AdVxT9TIoHXbhf6+c8fWOU=<br>
-----END CERTIFICATE-----</p>
<p>So, effectively, the @d1_cn_portal@ component is still using the old RapidSSL certificate to sign tokens, but (I think) MNs that have recently been restarted and grab the most recent CN certificate for verification purposes get the new LE certificate, and so can't verify incoming tokens signed by the CN. My guess is that this is going to be problematic for other MNs that go through a reboot and/or restart and rely on the CN signing tokens. Looking at the @portal.properties@ file on the CN, I see that it is indeed still pointing to the old certificate and key:<br>
<br>
cn.server.publiccert.filename=/etc/ssl/certs/_.dataone.org.crt<br>
cn.server.privatekey.filename=/etc/ssl/private/dataone_org.key</p>
<p>So, in the short term, we need to plan to re-configure @portal.properties@ on the production CNs to use the new Let's Encrypt certificates for token signing:<br>
<br>
cn.server.publiccert.filename=/etc/letsencrypt/live/cn.dataone.org/fullchain.pem<br>
cn.server.privatekey.filename=/etc/letsencrypt/live/cn.dataone.org/privkey.pem</p>
<p>However, the @fullchain.pem@ includes the intermediate CA certs as well, and I don't know if @CertificateManager.loadCertificateFromFile()@ handles multiple certificates in a file (i.e. does it use the first found, last found, etc?). We need to determine this before making the properties change, but also before other production MNs get rebooted and begin to fail authentication for clients.</p>
<p>Once tested, for the long term, we need to update the portal properties in the buildout to make the changes permanent. We may also need to add some logic for ensuring the @/etc/letsencrypt@ files have the correct permissions as Dave pointed out.</p>
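<p>For reference, the JDK's own <code>java.security.cert.CertificateFactory.generateCertificate(InputStream)</code> reads only the first certificate in a stream, which for @fullchain.pem@ is the leaf certificate. Whether @CertificateManager.loadCertificateFromFile()@ behaves the same way still needs to be verified; the helper below only illustrates the "first PEM block" extraction, it is not CertificateManager's actual behavior:</p>

```java
/**
 * Sketch: extract the first PEM certificate block from a fullchain file,
 * which is the leaf cert that token verification needs.
 */
public class PemFirstCert {
    private static final String BEGIN = "-----BEGIN CERTIFICATE-----";
    private static final String END = "-----END CERTIFICATE-----";

    /** Returns the first PEM certificate block, or null if none is present. */
    public static String firstCertificate(String pem) {
        int start = pem.indexOf(BEGIN);
        if (start < 0) return null;
        int end = pem.indexOf(END, start);
        if (end < 0) return null;
        return pem.substring(start, end + END.length());
    }
}
```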
Infrastructure - Story #7939 (Rejected): Indexing is too slow, especially with large packages
https://redmine.dataone.org/issues/7939 | 2016-11-25T19:03:24Z | Dave Vieglais (dave.vieglais@gmail.com)
<p>It appears that the indexing process is far too slow to keep up with content additions and changes. Since the version 2.3 upgrade which includes support for multiple indexing threads, the performance appears improved, but it falls far short of what is needed to provide reasonable currency.</p>
<p>In particular, it appears that large resource maps such as those provided by the ARCTIC node are very slow to evaluate.</p>
<p>Some optimization may be possible without major refactoring of the indexing process.</p>
<p>A few possible options:</p>
<ol>
<li><p>Check that changes to properties such as ownership do not trigger an entire re-index of the package. If permissions change, then there is no need to reindex the entire package since other properties are unchanged. This should be in place now since content is immutable, and only mutable metadata fields should be updated.</p></li>
<li><p>Dedicate a single thread to resource map processing, expanding to more threads when there is no backlog of other content. This would allow efficient processing of content on which the resource map indexing may depend.</p></li>
<li><p>Refactor the index so that resource maps may be processed independently, without the need for all other objects to be loaded and processed.</p></li>
<li><p>Refactor the indexing of resource maps so that a partially processed resource map is persisted so that processing may continue as content becomes available rather than starting from scratch each time.</p></li>
</ol>
Infrastructure - Bug #7919 (New): unloadable system metadata in CNs by Hazelcast
https://redmine.dataone.org/issues/7919 | 2016-10-26T16:22:56Z | Rob Nahf (rnahf@epscor.unm.edu)
<p>Looking through the metacat logs, I found a lot of instances where the HzSystemMetadataMap could not load system metadata for particular pids. Most had dryad in the pid (~1200), but another 130 are from elsewhere.</p>
<p>A random sample showed that the system metadata couldn't be retrieved via /meta, although the pid could be retrieved from the Dryad MN.</p>
<p>This appears to be another type of half-created content on the CN.</p>
<pre>rnahf@cn-ucsb-1:~$ grep 'could not load system metadata' /var/metacat/logs/metacat.log | cut -c60- | sort | uniq | grep -v dryad | wc -l
139
rnahf@cn-ucsb-1:~$ grep 'could not load system metadata' /var/metacat/logs/metacat.log | cut -c60- | sort | uniq | grep dryad | wc -l
1216</pre>
Infrastructure - Bug #4674 (New): Ask Judith, Mike and Virgina Perez.2.1 to obsolete those pids w...
https://redmine.dataone.org/issues/4674 | 2014-03-31T18:02:41Z | Jing Tao (tao@nceas.ucsb.edu)
<p>doi:10.5063/AA/Virginia Perez.2.1<br>
judith botha.1.1<br>
judith botha.2.1<br>
judith kruger.1.1<br>
judith kruger.2.1<br>
judith kruger.3.1<br>
judith kruger.4.1<br>
judith kruger.5.1<br>
doi:10.6085/AA/ SHLX00_XXXITV2XLSR03_20111128.40.1 (PISCO)</p>
Infrastructure - Task #3676 (Closed): design proposal for archive
https://redmine.dataone.org/issues/3676 | 2013-03-20T23:00:47Z | Rob Nahf (rnahf@epscor.unm.edu)
<p>Design an implementable solution that:</p>
<p>1) allows non-current package relationships to be traversable<br>
2) does the right thing regarding discovery and non-current items<br>
3) is compatible with mutability requirements</p>
Infrastructure - Bug #3675 (New): package relationships not available for archived objects
https://redmine.dataone.org/issues/3675 | 2013-03-20T19:12:28Z | Rob Nahf (rnahf@epscor.unm.edu)
<p>Currently, records for obsoleted items are maintained in the solr index so its resourceMap, documents, documentedBy relationships are available, and people can "investigate the past". However, those same relationships are not available for archived items, leading to an incomplete solution for this use case (accessing package relationships of out-of-date content).</p>
<p>Archive is used to limit discoverability, but it also eliminates the ability to navigate the package relationships. </p>
<p>Note: archive is intended to be used when the owner does not want to update the object, but simply remove it. However, nothing prevents the owner from archiving obsoleted content. So, in fact, the ability to navigate the package relationships of out-of-date content cannot be guaranteed, and is subject to the individual data management practices of content owners. </p>