DataONE Tasks: Issues
https://redmine.dataone.org/ | 2019-03-14T17:11:26Z
Infrastructure - Story #8779 (New): ForesiteResourceMap performance issue
https://redmine.dataone.org/issues/8779 | 2019-03-14T17:11:26Z | Rob Nahf (rnahf@epscor.unm.edu)
<p>Profiling reveals that much time is spent in IndexVisibilityDelegate, which is seemingly called twice unnecessarily: first in _init, then again in getAllResourceIDs().</p>
<p>This class in general is not well documented and has some confusing traversal code, so it is difficult to assess what exactly is going on. It also seems to be a misleading encapsulation of data, in that it attempts to filter out resource map members based on current system metadata properties (archived or not), but that's not mentioned at all in the sparse javadocs.</p>
<p>The code needs to be reviewed to make sure no unnecessary calls are made.<br>
If resource map checking (for completeness) is not going to be done anymore, this class should probably be deprecated or removed.</p>
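<p>A minimal sketch of the duplicate-call fix, assuming the expensive visibility filtering can be computed once and cached. The delegate stub and the method names (filterVisibleMembers, parseMembers) are illustrative, not the actual ForesiteResourceMap API:</p>
<pre>import java.util.Collections;
import java.util.Set;

// Hypothetical stub of the real delegate, for illustration only.
interface IndexVisibilityDelegate {
    Set<String> filterVisibleMembers(Set<String> memberIds);
}

public class ForesiteResourceMap {
    private IndexVisibilityDelegate visibility; // injected elsewhere
    private Set<String> cachedResourceIds;      // computed at most once

    // Both _init() and getAllResourceIDs() would call this, so the
    // expensive traversal/filtering runs a single time instead of twice.
    private Set<String> allResourceIds() {
        if (cachedResourceIds == null) {
            cachedResourceIds = visibility.filterVisibleMembers(parseMembers());
        }
        return cachedResourceIds;
    }

    private Set<String> parseMembers() {
        return Collections.emptySet(); // placeholder for the RDF parsing
    }
}
</pre>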
Infrastructure - Bug #8735 (In Progress): NPE in IndexTask causes indexing job to fail
https://redmine.dataone.org/issues/8735 | 2018-10-18T18:05:44Z | Rob Nahf (rnahf@epscor.unm.edu)
<p>The isArchived() method calls a method that can return null, but doesn't check for a null value before using the result.</p>
<p>(IndexTask is in d1_cn_common component)</p>
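<p>A minimal sketch of the defensive fix, assuming the nullable call is the system metadata accessor (SystemMetadata.getArchived() also returns a nullable Boolean); getSystemMetadata() is a stand-in name for whatever method can return null:</p>
<pre>import org.dataone.service.types.v1.SystemMetadata;

// Sketch only: guard both the accessor result and the nullable Boolean
// instead of dereferencing them directly.
public boolean isArchived() {
    SystemMetadata smd = getSystemMetadata(); // may return null
    return smd != null && Boolean.TRUE.equals(smd.getArchived());
}
</pre>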
Infrastructure - Bug #8724 (New): index out of bounds error in PortalCertificateManager
https://redmine.dataone.org/issues/8724 | 2018-10-02T19:11:41Z | Rob Nahf (rnahf@epscor.unm.edu)
<p>Noticed in the DEV logs:</p>
<pre>[ WARN] 2018-10-02 05:19:01,700 (PortalCertificateManager:getSession:308) 1
java.lang.ArrayIndexOutOfBoundsException: 1
at org.dataone.portal.PortalCertificateManager.getSession(PortalCertificateManager.java:305)
at org.dataone.cn.rest.v1.IdentityController.verifyAccount(IdentityController.java:542)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.springframework.web.bind.annotation.support.HandlerMethodInvoker.invokeHandlerMethod(HandlerMethodInvoker.java:176)
at org.springframework.web.servlet.mvc.annotation.AnnotationMethodHandlerAdapter.invokeHandlerMethod(AnnotationMethodHandlerAdapter.java:436)
at org.springframework.web.servlet.mvc.annotation.AnnotationMethodHandlerAdapter.handle(AnnotationMethodHandlerAdapter.java:424)
at org.springframework.web.servlet.DispatcherServlet.doDispatch(DispatcherServlet.java:923)
at org.springframework.web.servlet.DispatcherServlet.doService(DispatcherServlet.java:852)
at org.springframework.web.servlet.FrameworkServlet.processRequest(FrameworkServlet.java:882)
at org.springframework.web.servlet.FrameworkServlet.doPut(FrameworkServlet.java:800)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:649)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:727)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:303)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:208)
at org.dataone.cn.rest.PortalCertificateFilter.doFilter(PortalCertificateFilter.java:82)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:241)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:208)
at org.springframework.web.filter.CharacterEncodingFilter.doFilterInternal(CharacterEncodingFilter.java:88)
at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:76)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:241)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:208)
at org.dataone.cn.rest.ServiceDisableFilter.doFilter(ServiceDisableFilter.java:78)
</pre>
<p>The exception appears to be handled and logged, but the code should anticipate a single-worded session subject ("public", for example) and not log an error with a stack trace.</p>
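<p>A minimal sketch of the defensive handling, assuming the failing code splits the subject string and reads the second token; the tokenization and processTokens() call are illustrative, not the actual PortalCertificateManager logic:</p>
<pre>// Sketch only: tolerate single-worded subjects such as "public"
// instead of letting parts[1] throw ArrayIndexOutOfBoundsException.
String[] parts = subjectValue.split("\\s+");
if (parts.length > 1) {
    processTokens(parts); // hypothetical normal path
} else {
    log.debug("single-worded session subject: " + subjectValue); // no stack trace
}
</pre>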
Infrastructure - Task #8703 (New): test the cleaned up indexer in DEV
https://redmine.dataone.org/issues/8703 | 2018-09-24T18:05:50Z | Rob Nahf (rnahf@epscor.unm.edu)
<p>Test the indexing code in DEV after removing the messaging layer and the dependency upgrades.</p>
Infrastructure - Story #8702 (New): Indexing Refactor Strategy
https://redmine.dataone.org/issues/8702 | 2018-09-21T22:42:48Z | Rob Nahf (rnahf@epscor.unm.edu)
<p>Indexing performs poorly and has some consistency problems.</p>
<p>A solution was developed that addresses the main issues; it involves the creation of a separate solr core for relationships (the resource maps). Initially, the separate core will serve as a behind-the-scenes reference for the main search index. Relationships (resource_map, documents, isDocumentedBy) will still be copied into the main search record.</p>
<p>Additionally, archived objects will no longer be removed from the index; instead, an archived field will be added to the schema.</p>
<p>The new logic for processing resource maps and archiving objects should remove many of the inefficient checks that cause records to be reindexed.</p>
<p>The main phases for development will be:</p>
<ol>
<li>refactor out the custom solr client in favor of the standard SolrJ client (org.apache.solr.client.solrj).<br></li>
<li>migrate the schema to include the archived field & introduce the relationships core. Refactor the resource map subprocessor to use it, and trigger relationship tasks.</li>
<li>refactor the delete subprocessor (for archived records) & add the search handler.</li>
</ol>
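<p>A minimal sketch of the phase 1 direction, indexing through the standard SolrJ client rather than the custom one (the HttpSolrClient(String) constructor shown is the SolrJ 5.x form); the core URL, pid, and archived-field usage are illustrative of the plan, not the final schema:</p>
<pre>import org.apache.solr.client.solrj.SolrClient;
import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.common.SolrInputDocument;

public class ArchivedFieldSketch {
    public static void main(String[] args) throws Exception {
        SolrClient search = new HttpSolrClient("http://localhost:8983/solr/search_core");
        SolrInputDocument doc = new SolrInputDocument();
        doc.addField("id", "urn:uuid:example-pid");
        doc.addField("archived", false); // archived objects now stay in the index
        search.add(doc);
        search.commit();
        search.close();
    }
}
</pre>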
Infrastructure - Task #8700 (Closed): downgrade dependencies in indexing trunk to remain consisten...
https://redmine.dataone.org/issues/8700 | 2018-09-20T20:15:17Z | Rob Nahf (rnahf@epscor.unm.edu)
<p>While it would be great to keep upgraded dependencies in the trunk code, upgrading solr and solrj past 5.2.1 causes issues outside the indexing stack of components.</p>
<p>In question were:<br>
solr (core)<br>
solrj (client)<br>
spring (everything)<br>
httpClient<br>
junit</p>
<p>In the end, I needed to downgrade solrj to 5.2.1 and httpClient to 4.3.3 due to incompatibilities between solr and solrj. The solr test framework uses the solrj client, and was causing ClassNotFound exceptions in the d1_cn_index_processor tests.</p>
<p>Other than that, d1_cn_common, d1_cn_index_common, and d1_cn_index_generator were all happy with solr 7.1.0 and httpClient 4.5.3.</p>
Infrastructure - Bug #8571 (In Progress): IndexTool can't index a data object
https://redmine.dataone.org/issues/8571 | 2018-04-20T22:37:14Z | Jing Tao (tao@nceas.ucsb.edu)
<p>When I indexed a data object, the tool refused to index it and threw an exception:<br>
=====Unable to find the object path for id: <a href="https://pasta.lternet.edu/package/data/eml/knb-lter-arc/20048/1/dddf0ae9a1920ddbd9f2c63ccaae1774">https://pasta.lternet.edu/package/data/eml/knb-lter-arc/20048/1/dddf0ae9a1920ddbd9f2c63ccaae1774</a>. So it will be ignored for reindexing.</p>
<p>Data objects are not stored on the CN, so they never have an object path.</p>
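<p>A minimal sketch of a possible guard, assuming the tool can distinguish data objects from metadata; isDataObject() and processSystemMetadataOnly() are hypothetical helpers, not the IndexTool API:</p>
<pre>// Sketch only: a missing object path is expected for data objects,
// which are not stored on the CN, so index from system metadata alone.
if (objectPath == null && isDataObject(sysMeta)) {
    processSystemMetadataOnly(sysMeta);
} else if (objectPath == null) {
    logger.error("Unable to find the object path for id: " + pid);
}
</pre>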
Infrastructure - Story #8525 (In Progress): timeout exceptions thrown from Hazelcast disable sync...
https://redmine.dataone.org/issues/8525 | 2018-03-27T22:36:54Z | Rob Nahf (rnahf@epscor.unm.edu)
<p>Very occasionally, synchronization disables itself when RuntimeExceptions bubble up. The most common of these is when the Hazelcast client seemingly disconnects, or can't complete an operation, and a java.util.concurrent.TimeoutException is thrown.</p>
<p>These are usually due to network problems, as evidenced by timeout exceptions appearing in both the Metacat hazelcast-storage.log files as well as d1-processing logs.</p>
<p>Temporary problems like this should be recoverable, so a retry or bypass for those timeouts should be implemented. It's not clear whether a new HazelcastClient should be instantiated, or whether the same client is still usable. (Is the client tightly bound to a session, or does it recover?) If a new client is needed, preliminary searching through the code indicates that HazelcastClientFactory.getProcessingClient() is only used in a few places, and the singleton behavior it relies on can be sidestepped by removing the method and replacing it with a getLock() wrapper method (that seems to be the dominant use case). See the newer SyncQueueFacade in d1_synchronization for guidance. If the client is never exposed, it can be refreshed as needed.</p>
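<p>A minimal sketch of the proposed getLock() wrapper with a bounded retry on the wrapped TimeoutException; the class, retry count, and refresh comment are illustrative assumptions, not existing d1_synchronization code:</p>
<pre>import java.util.concurrent.TimeoutException;
import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.core.ILock;

public class ProcessingLockProvider {
    private HazelcastInstance client; // kept private, never exposed to callers

    public ILock getLock(String name) {
        for (int attempt = 1; ; attempt++) {
            try {
                return client.getLock(name);
            } catch (RuntimeException e) {
                if (!(e.getCause() instanceof TimeoutException) || attempt >= 3) {
                    throw e; // not a transient timeout, or retries exhausted
                }
                // transient network timeout: could also refresh the client here
            }
        }
    }
}
</pre>
<p>The log excerpts below show the failure mode this targets:</p>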
<pre>root@cn-unm-1:/var/metacat/logs# grep FATAL hazelcast-storage.log.1
[FATAL] 2018-03-27 03:15:19,380 (BaseManager$2:run:1402) [64.106.40.6]:5701 [DataONE] Caught error while calling event listener; cause: [CONCURRENT_MAP_CONTAINS_KEY] Operation Timeout (with no response!): 0
</pre><pre>[ERROR] 2018-03-27 03:15:19,781 [ProcessDaemonTask1] (SyncObjectTaskManager:run:84) java.util.concurrent.ExecutionException: java.lang.RuntimeException: java.util.concurrent
.TimeoutException: [CONCURRENT_MAP_REMOVE] Operation Timeout (with no response!): 0
java.util.concurrent.ExecutionException: java.lang.RuntimeException: java.util.concurrent.TimeoutException: [CONCURRENT_MAP_REMOVE] Operation Timeout (with no response!): 0
at java.util.concurrent.FutureTask.report(FutureTask.java:122)
at java.util.concurrent.FutureTask.get(FutureTask.java:192)
at org.dataone.cn.batch.synchronization.SyncObjectTaskManager.run(SyncObjectTaskManager.java:76)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.RuntimeException: java.util.concurrent.TimeoutException: [CONCURRENT_MAP_REMOVE] Operation Timeout (with no response!): 0
at com.hazelcast.impl.ClientServiceException.readData(ClientServiceException.java:63)
at com.hazelcast.nio.Serializer$DataSerializer.read(Serializer.java:104)
at com.hazelcast.nio.Serializer$DataSerializer.read(Serializer.java:79)
at com.hazelcast.nio.AbstractSerializer.toObject(AbstractSerializer.java:121)
at com.hazelcast.nio.AbstractSerializer.toObject(AbstractSerializer.java:156)
at com.hazelcast.client.ClientThreadContext.toObject(ClientThreadContext.java:72)
at com.hazelcast.client.IOUtil.toObject(IOUtil.java:34)
at com.hazelcast.client.ProxyHelper.getValue(ProxyHelper.java:186)
at com.hazelcast.client.ProxyHelper.doOp(ProxyHelper.java:146)
at com.hazelcast.client.ProxyHelper.doOp(ProxyHelper.java:140)
at com.hazelcast.client.QueueClientProxy.innerPoll(QueueClientProxy.java:115)
at com.hazelcast.client.QueueClientProxy.poll(QueueClientProxy.java:111)
at org.dataone.cn.batch.synchronization.type.SyncQueueFacade.poll(SyncQueueFacade.java:231)
at org.dataone.cn.batch.synchronization.tasks.SyncObjectTask.call(SyncObjectTask.java:131)
at org.dataone.cn.batch.synchronization.tasks.SyncObjectTask.call(SyncObjectTask.java:73)
</pre>
Infrastructure - Story #8504 (New): Support creation of data citation record from solr record
https://redmine.dataone.org/issues/8504 | 2018-03-19T21:53:13Z | Dave Vieglais (dave.vieglais@gmail.com)
<p>The goal of this story is to ensure that elements in the solr search schema are available and appropriately populated to support generation of DataCite version 4.x or later records.</p>
<p>By ensuring support for this schema, it can also be asserted that suitable citation metadata can be provided in landing pages and other renderings of content provided by DataONE.</p>
<p>Resources:</p>
<ul>
<li><a href="https://schema.datacite.org/meta/kernel-4.1/" class="external">DataCite Schema version 4</a></li>
<li><a href="http://indexer-documentation.readthedocs.io/en/latest/generated/solr_schema.html" class="external">DataONE solr Search fields</a></li>
<li><a href="https://rd-alliance.org/group/data-citation-wg/outcomes/data-citation-recommendation.html" class="external">RDA Data Citation Recommendations</a></li>
</ul>
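<p>An illustrative sketch for the review: a candidate mapping from the mandatory DataCite 4.x properties to DataONE solr search fields. The pairings are assumptions to be verified against the schema docs above, not an established crosswalk:</p>
<pre>import java.util.LinkedHashMap;
import java.util.Map;

public class DataCiteFieldReview {
    public static void main(String[] args) {
        // keys: DataCite 4.x mandatory properties; values: candidate solr fields
        Map<String, String> dataCiteToSolr = new LinkedHashMap<>();
        dataCiteToSolr.put("identifier", "id");
        dataCiteToSolr.put("creator", "origin");
        dataCiteToSolr.put("title", "title");
        dataCiteToSolr.put("publisher", "datasource");          // assumption
        dataCiteToSolr.put("publicationYear", "datePublished"); // year must be extracted
        dataCiteToSolr.forEach((dc, solr) -> System.out.println(dc + " <- " + solr));
    }
}
</pre>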
Infrastructure - Story #8363 (New): indexer shutdown generates index tasks
https://redmine.dataone.org/issues/8363 | 2018-02-12T21:42:22Z | Rob Nahf (rnahf@epscor.unm.edu)
<p>Seen in STAGE: during a service stop, the index processor somehow generated about 15k tasks (after processing 215k tasks over the weekend). It also created about 12.5k failures. Before trying to stop the services, this was the status of postgres:</p>
<pre>d1-index-queue=# select status, count(*) from index_task group by status;
status | count
------------+-------
NEW | 5
FAILED | 1659
IN PROCESS | 367
(3 rows)
</pre>
<p>Execution of <code>/etc/init.d/d1-index-task-processor stop</code> timed out.<br>
I performed <code>/etc/init.d/d1-index-task-generator stop</code> successfully, getting an <code>[OK]</code>;<br>
then I performed <code>/etc/init.d/d1-processing stop</code> on UCSB, also getting an <code>[OK]</code>.</p>
<p>Examination of the indexing log file a couple of minutes later showed this:</p>
<pre>[ INFO] 2018-02-12 20:36:08,975 (IndexTaskProcessor:logProcessorLoad:245) new tasks:0, tasks previously failed: 1661
[ INFO] 2018-02-12 20:36:09,361 (IndexTaskProcessor:processFailedIndexTaskQueue:226) IndexTaskProcessor.processFailedIndexTaskQueue with size 0
[ WARN] 2018-02-12 20:36:09,361 (IndexTaskProcessorJob:execute:58) processing job [org.dataone.cn.index.processor.IndexTaskProcessorJob@515de84e] finished execution of index task processor [org.dataone.cn.index.processor.IndexTaskProcessor@2062
1d44]
[ WARN] 2018-02-12 20:36:26,571 (IndexTaskProcessorScheduler:stop:99) stopping index task processor quartz scheduler [org.dataone.cn.index.processor.IndexTaskProcessorScheduler@103bbd22] ...
[ INFO] 2018-02-12 20:36:26,572 (QuartzScheduler:standby:572) Scheduler QuartzScheduler_$_NON_CLUSTERED paused.
[ INFO] 2018-02-12 20:36:26,572 (IndexTaskProcessorScheduler:stop:111) Scheuler.interrupt method can't succeed to interrupt the d1 index job and the static method IndexTaskProcessorJob.interruptCurrent() will be called.
[ WARN] 2018-02-12 20:36:26,572 (IndexTaskProcessorJob:interruptCurrent:92) IndexTaskProcessorJob class [1806183035] interruptCurrent called, shutting down processor [org.dataone.cn.index.processor.IndexTaskProcessor@20621d44]
[ WARN] 2018-02-12 20:36:26,573 (IndexTaskProcessor:shutdownExecutor:952) processor [org.dataone.cn.index.processor.IndexTaskProcessor@20621d44] Shutting down the ExecutorService. Will allow active tasks to finish; will cancel submitted tasks
and return them to NEW status, wait for active tasks to finish, then return any remaining task not yet submitted to NEW status....
[ WARN] 2018-02-12 20:36:26,573 (IndexTaskProcessor:shutdownExecutor:955) ...1.) closing ExecutorService to new tasks...
[ WARN] 2018-02-12 20:36:26,574 (IndexTaskProcessor:shutdownExecutor:957) ...2.) cancelling cancellable futures...
[ WARN] 2018-02-12 20:36:26,575 (IndexTaskProcessor:shutdownExecutor:958) ...number of futures: 591344
[ WARN] 2018-02-12 20:36:26,575 (IndexTaskProcessor:shutdownExecutor:959) ... number of tasks in futures map: 591344
</pre>
<p>15 minutes or so later, the log showed this:</p>
<pre>[ WARN] 2018-02-12 20:36:26,573 (IndexTaskProcessor:shutdownExecutor:955) ...1.) closing ExecutorService to new tasks...
[ WARN] 2018-02-12 20:36:26,574 (IndexTaskProcessor:shutdownExecutor:957) ...2.) cancelling cancellable futures...
[ WARN] 2018-02-12 20:36:26,575 (IndexTaskProcessor:shutdownExecutor:958) ...number of futures: 591344
[ WARN] 2018-02-12 20:36:26,575 (IndexTaskProcessor:shutdownExecutor:959) ... number of tasks in futures map: 591344
[ WARN] 2018-02-12 20:52:30,811 (IndexTaskProcessor:shutdownExecutor:988) ...number of (cancellable) runnables/tasks reset to new: 0
[ WARN] 2018-02-12 20:52:30,811 (IndexTaskProcessor:shutdownExecutor:989) ...number of (cancellable) runnables not mapped to tasks: 0
[ WARN] 2018-02-12 20:52:30,811 (IndexTaskProcessor:shutdownExecutor:990) ...number of uncancellable runnables: 591344 (completed or in process)
[ WARN] 2018-02-12 20:52:30,812 (IndexTaskProcessor:shutdownExecutor:993) ...3.) waiting (with timeout) for active futures to finish...
[ WARN] 2018-02-12 20:52:30,812 (IndexTaskProcessor:shutdownExecutor:998) ...4.) Reviewing remaining uncancellables to check for completion, returning incomplete ones to NEW status...
[ WARN] 2018-02-12 20:52:30,835 (IndexTaskProcessor:shutdownExecutor:1026) ...5.) Calling shutdownNow on the executor service.
[ WARN] 2018-02-12 20:52:30,835 (IndexTaskProcessor:shutdownExecutor:1028) ... .... number of runnables still waiting: 0
[ WARN] 2018-02-12 20:52:30,835 (IndexTaskProcessor:shutdownExecutor:1030) ...6.) returning preSubmitted tasks to NEW status...
[ WARN] 2018-02-12 20:52:30,835 (IndexTaskProcessor:shutdownExecutor:1031) ... .... number of preSubmitted tasks: 34735
[ INFO] 2018-02-12 20:52:30,835 (IndexTask:markNew:454) Even tough it was masked new, it is still considered failed for id testGetPackage_2017119234441164 since it was tried to many times.
[ERROR] 2018-02-12 20:52:30,891 (IndexTaskProcessor:shutdownExecutor:1038) ....... Exception thrown trying to return task to NEW status for pid: testGetPackage_2017119234441164
org.springframework.orm.hibernate3.HibernateOptimisticLockingFailureException: Object of class [org.dataone.cn.index.task.IndexTask] with identifier [13071797]: optimistic locking failed; nested exception is org.hibernate.StaleObjectStateException: Row was updated or deleted by another transaction (or unsaved-value mapping was incorrect): [org.dataone.cn.index.task.IndexTask#13071797]
...
[ INFO] 2018-02-12 20:54:19,618 (IndexTask:markNew:454) Even tough it was masked new, it is still considered failed for id P3_201622214921901 since it was tried to many times.
[ WARN] 2018-02-12 20:54:19,621 (IndexTaskProcessor:shutdownExecutor:1036) ... preSubmittedTask for pid P3_201622214921901returned to NEW status.
[ WARN] 2018-02-12 20:54:19,623 (IndexTaskProcessor:shutdownExecutor:1036) ... preSubmittedTask for pid resource_map_doi:10.5065/D6VD6WFPreturned to NEW status.
[ INFO] 2018-02-12 20:54:19,623 (IndexTask:markNew:454) Even tough it was masked new, it is still considered failed for id testGetPackage_NotAuthorized_201710605522454 since it was tried to many times.
[ WARN] 2018-02-12 20:54:19,626 (IndexTaskProcessor:shutdownExecutor:1036) ... preSubmittedTask for pid testGetPackage_NotAuthorized_201710605522454returned to NEW status.
[ WARN] 2018-02-12 20:54:19,628 (IndexTaskProcessor:shutdownExecutor:1036) ... preSubmittedTask for pid resource_map_urn:uuid:d3606ccb-2d50-4723-ae45-c0d01b817e48returned to NEW status.
[ WARN] 2018-02-12 20:54:19,631 (IndexTaskProcessor:shutdownExecutor:1036) ... preSubmittedTask for pid resource_map_doi:10.18739/A2165Freturned to NEW status.
[ WARN] 2018-02-12 20:54:19,631 (IndexTaskProcessor:shutdownExecutor:1041) ............7.) DONE with shutting down IndexTaskProcessor.
[ INFO] 2018-02-12 20:54:19,631 (IndexTaskProcessorScheduler:stop:113) The scheuler.interrupt method seems not interrupt the d1 index job and the static method IndexTaskProcessorJob.interruptCurrent() was called.
[ WARN] 2018-02-12 20:54:19,632 (IndexTaskProcessorScheduler:stop:128) Job scheduler [org.dataone.cn.index.processor.IndexTaskProcessorScheduler@103bbd22] finished executing all jobs. The d1-index-processor shut down sucessfully.============================================
</pre>
<p>But postgres yielded this:</p>
<pre>d1-index-queue=# select status, count(*) from index_task group by status;
status | count
--------+-------
NEW | 15367
FAILED | 14032
(2 rows)
</pre>
<p>Indexer shutdowns are a stubborn problem...</p>
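<p>A sketch of one way to harden the return-to-NEW step against the optimistic-locking failure above: reload the stale row and retry. The repository and its method names are hypothetical stand-ins for the actual persistence layer:</p>
<pre>import org.springframework.orm.hibernate3.HibernateOptimisticLockingFailureException;

// Sketch only: retry markNew() after reloading the task, since another
// transaction may have updated the row between our read and save.
void returnTaskToNew(IndexTask task) {
    for (int attempt = 0; attempt < 3; attempt++) {
        try {
            task.markNew();
            repo.save(task);
            return;
        } catch (HibernateOptimisticLockingFailureException e) {
            task = repo.findOne(task.getId()); // reload the current row version
            if (task == null) {
                return; // row deleted by another transaction
            }
        }
    }
}
</pre>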
Infrastructure - Task #8308 (New): Review use of /node/subject in the node documents
https://redmine.dataone.org/issues/8308 | 2018-02-06T20:05:53Z | Dave Vieglais (dave.vieglais@gmail.com)
<p>The <a href="https://releases.dataone.org/online/api-documentation-v2.0/apis/Types.html#Types.Node" class="external">description</a> of <code>subject</code> in the Node structure is a bit confusing:</p>
<p>The Subject of this node, which can be repeated as needed. The Node.subject represents the identifier of the node that would be found in X.509 certificates used to securely communicate with this node. Thus, it is an X.509 Distinguished Name that applies to the host on which the Node is operating. When (and if) this hostname changes the new subject for the node would be added to the Node to track the subject that has been used in various access control rules over time.</p>
<p>Review the use of the node subject and adjust the documentation to clarify.</p>
Infrastructure - Story #8307 (New): Check node subject on node registration and subsequent calls
https://redmine.dataone.org/issues/8307 | 2018-02-06T20:04:39Z | Dave Vieglais (dave.vieglais@gmail.com)
<p>The <code>/node/subject</code> entry of the node document should match the subject of the certificate used to register the node (unless the call is being made by a CN certificate).</p>
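<p>A minimal sketch of the proposed check using the DataONE v1 types; the isCnSubject() helper and the detail code are assumptions for illustration:</p>
<pre>import org.dataone.service.exceptions.NotAuthorized;
import org.dataone.service.types.v1.Node;
import org.dataone.service.types.v1.Session;
import org.dataone.service.types.v1.Subject;

// Sketch only: reject registration/update calls whose certificate
// subject is not listed under /node/subject, unless made by a CN.
public void validateNodeSubject(Session session, Node node) throws NotAuthorized {
    Subject caller = session.getSubject();
    if (isCnSubject(caller)) {
        return; // CN certificates may act on behalf of any node
    }
    boolean listed = node.getSubjectList().stream()
            .anyMatch(s -> s.getValue().equals(caller.getValue()));
    if (!listed) {
        throw new NotAuthorized("4861", "certificate subject "
                + caller.getValue() + " is not a registered node subject");
    }
}
</pre>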
Infrastructure - Story #8234 (New): Use University of Kansas ORCID membership to support authenti...
https://redmine.dataone.org/issues/8234 | 2018-01-09T02:00:28Z | Dave Vieglais (dave.vieglais@gmail.com)
<p><a href="https://orcid.org/members/001G000001CAkZgIAL-university-of-kansas" class="external">KU is a premium ORCID member</a> as a member of the Greater Western Library Alliance (GWLA). As a result, KU has access to five ORCID API keys. One is currently in use for the KU DSpace instance.</p>
<p>The goal of this story is to leverage one of the remaining API keys to support ORCID authentication in the DataONE production environment.</p>
Infrastructure - Bug #8229 (Closed): cn.creates failing with "413: Request Entity Too Large" thro...
https://redmine.dataone.org/issues/8229 | 2017-12-18T23:22:14Z | Rob Nahf (rnahf@epscor.unm.edu)
<p>While running sync from cn-orc-1, 18 cn.creates failed with the following exception. It turns out that the apache configuration was not controlled, and only cn-ucsb-1 had increased the SSL renegotiation size. The solution is to add the configuration to cn-ssl.conf in dataone-cn-os-core.</p>
<p>An actual error message:</p>
<pre>[ERROR] 2017-12-18 20:30:12,542 (V2TransferObjectTask:call:269) Task-urn:node:CLOEBIRD-EOD_CLO_2016.eml - UnrecoverableException: EOD_CLO_2016.eml cn.createObject failed - ServiceFailure - 413: Request Entity Too Large: parser for deserializing HTML not written yet. Providing message body:
<!DOCTYPE HTML PUBLIC "-//IETF//DTD HTML 2.0//EN">

413 Request Entity Too Large

Request Entity Too Large
The requested resource /cn/v2/object
does not allow request data with POST requests, or the amount of data provided in
the request exceeds the capacity limit.

Apache/2.4.7 (Ubuntu) Server at cn-orc-1.dataone.org Port 443
</pre>
Infrastructure - Story #8227 (In Progress): ExceptionHandler regurgitates long html pages into th...
https://redmine.dataone.org/issues/8227 | 2017-12-13T21:19:23Z | Rob Nahf (rnahf@epscor.unm.edu)
<p>While it is useful to know what was returned in an incorrect error response, HTML pages can be verbose and include excessive markup that isn't useful. Especially when a GMN MN is in debugging mode and a systematic error is being returned (such as during an authentication issue), these logged html pages can end up being 75% of the log files and cause meaningful log lines to scroll off the end of the log rotation.</p>
<p>An option should be provided to limit the number of characters returned in the ServiceFailure.</p>
<p>Options are to:<br>
1. eliminate the message body altogether<br>
2. truncate the message body<br>
3. only print the visible parts of the HTML (strip the markup and non-visible elements)<br>
4. a combination of 2 & 3</p>
<p>Since this is a new feature, develop in trunk.</p>
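<p>A minimal sketch of options 2 & 3 combined, assuming an HTML-stripping dependency such as Jsoup is acceptable; the method name and truncation marker are illustrative:</p>
<pre>import org.jsoup.Jsoup;

// Sketch only: keep just the visible text of an HTML error body,
// truncated to a configurable maximum, before logging it.
public static String condenseErrorBody(String body, int maxChars) {
    String visible = Jsoup.parse(body).text(); // drops tags, keeps visible text
    if (visible.length() <= maxChars) {
        return visible;
    }
    return visible.substring(0, maxChars) + "... [truncated]";
}
</pre>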