DataONE Tasks: Issueshttps://redmine.dataone.org/https://redmine.dataone.org/favicon.ico2020-07-02T19:06:39ZDataONE Tasks
Redmine Infrastructure - Task #8865 (Closed): Configure dataone.org web server to redirect DataONE datase...https://redmine.dataone.org/issues/88652020-07-02T19:06:39ZBryce Mecummecum@nceas.ucsb.edu
<p>As scoped out in <a href="https://hpad.dataone.org/8h3o_7VPTIibo5xL9bz24w">https://hpad.dataone.org/8h3o_7VPTIibo5xL9bz24w</a>, we'd like to be able to reference DataONE resources in a linked-open-data manner. This is useful because it forms the basis for building more interesting things on top of them. However, those resources (e.g., Data Packages) don't currently have suitable IRIs, though they have a variety of URLs.</p>
<p>The ideas in the above proposal are multi-tiered but a good first step can be taken immediately: support a PIRI space for "Datasets" (DataONE Data Packages) by redirecting requests from their IRI form:</p>
<p><code>https://dataone.org/datasets/$ID</code></p>
<p>to their URL form:</p>
<p><code>https://search.dataone.org/view/$ID</code>.</p>
<p>Such a redirection can be achieved within our Apache configuration using <code>mod_rewrite</code> and a rule similar to:</p>
<pre>RewriteRule "^/datasets/(.+)$" "https://search.dataone.org/view/$1" [L,R]
</pre> CN REST - Bug #8860 (New): /token endpoint doesn't set a content-type and character encodinghttps://redmine.dataone.org/issues/88602020-02-29T01:00:11ZBryce Mecummecum@nceas.ucsb.edu
<p>On Firefox only, requests to the /portal/token endpoint (i.e., the one MetacatUI and other clients use to fetch their auth tokens, like <a href="https://cn.dataone.org/portal/token">https://cn.dataone.org/portal/token</a>) result in errors in the browser console.</p>
<p>When you access the URL via an XHR request, you see:</p>
<blockquote>
<p>XML Parsing Error: syntax error<br>
Location: <a href="https://cn-stage.test.dataone.org/portal/token">https://cn-stage.test.dataone.org/portal/token</a><br>
Line Number 1, Column 1:</p>
</blockquote>
<p>When you access the URL directly in Firefox:</p>
<blockquote>
<p>The character encoding of the plain text document was not declared. The document will render with garbled text in some browser configurations if the document contains characters from outside the US-ASCII range. The character encoding of the file needs to be declared in the transfer protocol or file needs to use a byte order mark as an encoding signature.</p>
</blockquote>
<p>I had a hunch that this error would go away if the response simply had the <code>Content-Type</code> header set to <code>text/plain; charset=utf-8</code> so I spun up <code>mitmproxy</code>, made that edit to the intercepted response, and saw that the error does go away.</p>
<p>I think we should modify the portal code to set the <code>Content-Type</code> header like above so the error goes away.</p>
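<p>A minimal, standalone sketch of the fix's effect (this uses the JDK's built-in <code>HttpServer</code> as a stand-in for the portal webapp; the token value and port are fake):</p>

```java
import com.sun.net.httpserver.HttpServer;
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.InetSocketAddress;
import java.net.URL;
import java.nio.charset.StandardCharsets;

public class TokenContentType {

    // Spin up a fake /portal/token endpoint that applies the proposed fix,
    // fetch it once, and return the Content-Type the client observes.
    static String probe() throws Exception {
        HttpServer server = HttpServer.create(new InetSocketAddress(0), 0);
        server.createContext("/portal/token", exchange -> {
            byte[] body = "fake.jwt.token".getBytes(StandardCharsets.UTF_8);
            // The proposed fix: declare the media type *and* the charset.
            exchange.getResponseHeaders().set("Content-Type", "text/plain; charset=utf-8");
            exchange.sendResponseHeaders(200, body.length);
            try (OutputStream os = exchange.getResponseBody()) {
                os.write(body);
            }
        });
        server.start();
        try {
            URL url = new URL("http://localhost:" + server.getAddress().getPort() + "/portal/token");
            return ((HttpURLConnection) url.openConnection()).getContentType();
        } finally {
            server.stop(0);
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println(probe()); // text/plain; charset=utf-8
    }
}
```

<p>In the actual portal code the equivalent change would presumably be a one-liner like <code>response.setContentType("text/plain; charset=utf-8")</code> wherever the servlet writes the token, though I haven't checked exactly where that lives.</p>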
Infrastructure - Bug #8836 (Closed): CSS and JS not loading on ProvONE documentationhttps://redmine.dataone.org/issues/88362019-08-12T21:54:25ZBryce Mecummecum@nceas.ucsb.edu
<p>I was looking at <a href="http://jenkins-1.dataone.org/jenkins/view/Documentation%20Projects/job/ProvONE-Documentation-trunk/ws/provenance/ProvONE/v1/provone.html">http://jenkins-1.dataone.org/jenkins/view/Documentation%20Projects/job/ProvONE-Documentation-trunk/ws/provenance/ProvONE/v1/provone.html</a> today and Chrome and Safari are both upset with how its CSS and JS are set up and are refusing to load either. Steps to reproduce:</p>
<ol>
<li>Go to <a href="http://jenkins-1.dataone.org/jenkins/view/Documentation%20Projects/job/ProvONE-Documentation-trunk/ws/provenance/ProvONE/v1/provone.html">http://jenkins-1.dataone.org/jenkins/view/Documentation%20Projects/job/ProvONE-Documentation-trunk/ws/provenance/ProvONE/v1/provone.html</a></li>
<li>See that no styles are loaded and the hide/show examples buttons don't work. See CSP errors in devtools.</li>
</ol>
Infrastructure - Bug #8822 (New): Account queries in STAGE-2 failing from web browserhttps://redmine.dataone.org/issues/88222019-06-19T01:44:19ZBryce Mecummecum@nceas.ucsb.edu
<p>Steps to reproduce:</p>
<ol>
<li>Visit <a href="https://search-stage-2.test.dataone.org/">https://search-stage-2.test.dataone.org/</a></li>
<li>Log in</li>
<li>Navigate to <a href="https://search-stage-2.test.dataone.org/data">https://search-stage-2.test.dataone.org/data</a></li>
<li>Observe an HTTP 500 in the Network pane of whichever browser you're using for the request to <a href="https://search-stage-2.test.dataone.org/cn/v2/accounts/?query=%7BYOUR_DN%7D">https://search-stage-2.test.dataone.org/cn/v2/accounts/?query={YOUR_DN}</a></li>
</ol>
<p>Response body:</p>
<pre><?xml version="1.0" encoding="UTF-8"?>
<error detailCode="500" errorCode="500" name="ServiceFailure">
<description>Internal Server Error: The server encountered an unexpected condition which prevented it from fulfilling the request.</description>
</error>
</pre>
<p>Some notes:</p>
<ul>
<li>I notice this in multiple browsers</li>
<li>I notice this only when the request is issued from MetacatUI, not when I visit the URL in my browser or hit it with curl</li>
<li>I don't notice this error on search.dataone.org</li>
<li>MetacatUI on search.dataone.org doesn't even issue this request</li>
<li>MetacatUI on stage-2 is at v2.4.2 and on search is at 2.6.1</li>
</ul>
<p>The service failure seems like a bug to me, but I'm not sure whether it's a MetacatUI bug or an issue in the CN stack (i.e., Apache config or something), so I wanted to file it for someone to take a look.</p>
Infrastructure - Task #8820 (Closed): Add new DataONE Object format for HDF4/5 file formatshttps://redmine.dataone.org/issues/88202019-06-13T19:54:01ZBryce Mecummecum@nceas.ucsb.edu
<p>HDF 4 and 5 are efficient binary formats for data commonly used in science: <a href="https://en.wikipedia.org/wiki/Hierarchical_Data_Format">https://en.wikipedia.org/wiki/Hierarchical_Data_Format</a>, <a href="https://www.hdfgroup.org/solutions/hdf5/">https://www.hdfgroup.org/solutions/hdf5/</a>. I don't think we have a lot of content in this format, if any, but it's a pretty common format and a good one at that.</p>
<p>I did some research on MIME types and file extensions:</p>
<p>Re: MIME type:</p>
<blockquote>
<p>The recommended content is application/x-hdf5 for data in HDF5 or application/x-hdf for data in earlier versions.</p>
</blockquote>
<p><a href="https://www.hdfgroup.org/2018/06/citations-for-hdf-data-and-software/">https://www.hdfgroup.org/2018/06/citations-for-hdf-data-and-software/</a></p>
<p>Re: Extension,</p>
<ul>
<li><a href="https://en.wikipedia.org/wiki/Hierarchical_Data_Format">https://en.wikipedia.org/wiki/Hierarchical_Data_Format</a> lists a few of the variants I've seen</li>
<li>The R hdf5 package uses the h5 extension (<a href="https://github.com/grimbough/rhdf5/tree/master/inst/testfiles">https://github.com/grimbough/rhdf5/tree/master/inst/testfiles</a>) so I went with this</li>
<li>.hdf4 and .hdf5 seem common too but h4/h5 seems a tad more common</li>
</ul>
<p><strong>Here are the details for each of the new formats:</strong></p>
<p><code>HDF4</code></p>
<ul>
<li>formatId: <code>application/x-hdf</code></li>
<li>formatName: Hierarchical Data Format version 4 (HDF4)</li>
<li>mediaType: <code>application/octet-stream</code></li>
<li>extension: h4</li>
</ul>
<p><code>HDF5</code></p>
<ul>
<li>formatId: <code>application/x-hdf5</code></li>
<li>formatName: Hierarchical Data Format version 5 (HDF5)</li>
<li>mediaType: <code>application/octet-stream</code></li>
<li>extension: h5</li>
</ul>
Infrastructure - Task #8817 (New): Configure sitemaps on the CNhttps://redmine.dataone.org/issues/88172019-06-06T23:52:23ZBryce Mecummecum@nceas.ucsb.edu
<p>Support for sitemaps landed last fall in Metacat: <a href="https://github.com/NCEAS/metacat/pull/1283">https://github.com/NCEAS/metacat/pull/1283</a>. Sitemaps are good for users but especially for search engines, and DataONE's Search Catalog could benefit from having them enabled: a sitemap would help crawlers discover all of the datasets in DataONE. The CNs already run Metacat and should use Metacat's sitemap feature to generate sitemaps for all content.</p>
<p>To enable sitemaps on the CNs, a few things seem to be needed. I'm not very familiar with how the CNs get built so I may be wrong or be missing things:</p>
<ul>
<li>Sitemaps rely on two properties in Metacat's <code>metacat.properties</code> file, which should have values: <code>sitemap.location.base=https://search.dataone.org/</code> and <code>sitemap.entry.base=https://search.dataone.org/view</code></li>
<li>The Apache config the CNs are built with needs to serve the <code>sitemap_index.xml</code> and individual sitemaps from the Tomcat webapps dir. Metacat generates sitemaps in the <code>sitemaps</code> subfolder (e.g., <code>/usr/lib/tomcat8/webapps/metacat/sitemaps</code>). A <code>Directory</code> directive should work so long as filesystem permissions allow Apache to see the files.</li>
<li>We need a robots.txt at search.dataone.org that points crawlers at the sitemap index file, <code>sitemap_index.xml</code></li>
<li>Metacat generates sitemaps with a recurring job mechanism that's internal to Metacat. AFAIK this job isn't turned on when Tomcat loads Metacat; a request has to be sent to the admin API, which turns the job on as a side effect. We might want to change this to reduce maintenance burden and the chance of serving stale sitemaps</li>
</ul>
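<p>A sketch of the Apache piece (the alias and paths here are my assumptions, not the actual CN config):</p>
<pre>Alias "/sitemaps" "/usr/lib/tomcat8/webapps/metacat/sitemaps"
<Directory "/usr/lib/tomcat8/webapps/metacat/sitemaps">
    Require all granted
</Directory>
</pre>
<p>And the robots.txt served from search.dataone.org would just need a line like <code>Sitemap: https://search.dataone.org/sitemaps/sitemap_index.xml</code> (exact path depending on how we alias the sitemaps dir).</p>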
<p>Dave nominated Jing for this work and has targeted this for the next CCI release. I'm not sure which that is so please select whichever one is appropriate.</p>
<p>Note this relates to <a href="https://redmine.dataone.org/issues/8693">https://redmine.dataone.org/issues/8693</a> which we've delayed because Google's crawler infrastructure has changed and DataONE is now visible by Google. Google staff have indicated we only need to send them a robots.txt that points to our sitemaps for them to begin crawling.</p>
Infrastructure - Decision #8774 (New): Add new CN format for MPEG-2 or update video/mpeg format t...https://redmine.dataone.org/issues/87742019-03-08T02:04:18ZBryce Mecummecum@nceas.ucsb.edu
<p>We already have this format:</p>
<pre><objectFormat>
<formatId>video/mpeg</formatId>
<formatName>MPEG-1 Video</formatName>
<formatType>DATA</formatType>
<mediaType name="video/mpeg"/>
<extension>mpg</extension>
</objectFormat>
</pre>
<p>It clearly says MPEG-1 Video but there is also an MPEG-2 format. It'd be useful to have a proper format ID to use for MPEG-2 video. The info for MPEG-2 is largely the same as for MPEG-1 (format type, mediaType, extension). Should we add a new format or just update the description of the existing one? We also have an MP4 format type so maybe that'd be a reason to just add another to cover MPEG-2.</p>
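<p>If we go the new-format route, here's a sketch of what the entry could look like, mirroring the MPEG-1 entry above. The <code>formatId</code> shown is just a placeholder since <code>video/mpeg</code> is already taken; we'd need to settle on the actual value:</p>
<pre><objectFormat>
<formatId>video/mpeg-2</formatId>
<formatName>MPEG-2 Video</formatName>
<formatType>DATA</formatType>
<mediaType name="video/mpeg"/>
<extension>mpg</extension>
</objectFormat>
</pre>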
Infrastructure - Decision #8765 (Closed): Consider changing how BaseSolrFieldXPathTest workshttps://redmine.dataone.org/issues/87652019-02-13T00:36:55ZBryce Mecummecum@nceas.ucsb.edu
<p>Ran into a weird thing while expanding the <a href="https://repository.dataone.org/software/cicore/trunk/cn/d1_cn_index_processor/src/test/java/org/dataone/cn/index/SolrFieldXPathEmlTest.java">https://repository.dataone.org/software/cicore/trunk/cn/d1_cn_index_processor/src/test/java/org/dataone/cn/index/SolrFieldXPathEmlTest.java</a> to test an EML 2.2.0 doc.</p>
<p><code>SolrFieldXPathEmlTest</code> compares values extracted via various subprocessors to a set of expectations stored in a <code>HashMap<String, String>()</code> (of form <code><fieldName, expectedValue></code>). This data structure limits expectations to one value per field. Forever ago, in <code>r7985</code>,</p>
<pre>r7985 | sroseboo | 2012-03-23 12:12:53 -0800 (Fri, 23 Mar 2012) | 1 line
initial commit of search index support for parsing FGDC science metadata docs.
Index: src/test/java/org/dataone/cn/index/BaseSolrFieldXPathTest.java
</pre>
<p>support was added for testing multiple expectations for a single field by defining a convention of smushing multiple values into a single string, separated by two # characters (##). For example,</p>
<pre>eml210Expected.put("project", "Random Project Title##Another Random Project Title");
</pre>
<p>would test the <code>project</code> field for two values, "Random Project Title" and "Another Random Project Title", not the literal string "Random Project Title##Another Random Project Title". I imagine ## was picked as the separator because it's rare to see in a metadata record, which seems reasonable.</p>
<p>I ran afoul of this today because I wanted to test an expectation for a field with a # in it and I couldn't because it was being split when I didn't want it to be. Why did a single # break things when the convention above is a double #? Because <code>BaseSolrFieldXPathTest.java</code> splits the expectation string using <code>StringUtils.split</code> like this:</p>
<pre>StringUtils.split(expectedForField, "##") // Where expectedForField might equal "Random Project Title##Another Random Project Title"
</pre>
<p>According to <a href="https://commons.apache.org/proper/commons-lang/apidocs/org/apache/commons/lang3/StringUtils.html#split-java.lang.String-java.lang.String-">https://commons.apache.org/proper/commons-lang/apidocs/org/apache/commons/lang3/StringUtils.html#split-java.lang.String-java.lang.String-</a>, the second arg to <code>StringUtils.split</code>, <code>separatorChars</code>, is "the characters used as the delimiters, null splits on whitespace" which tells me we're using it wrong. I think the method we needed to use was <code>StringUtils.splitByWholeSeparator</code> which correctly splits only on double #.</p>
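<p>The difference can be reproduced with plain JDK string handling. This is only an approximation of the commons-lang behavior (<code>StringUtils.split</code> treats each character of <code>separatorChars</code> as a delimiter and drops empty tokens, which the regex <code>"#+"</code> mimics here), and the test strings are made up:</p>

```java
import java.util.Arrays;

public class SplitDemo {
    public static void main(String[] args) {
        // One field expectation whose value legitimately contains a single '#'.
        String expected = "Value with # inside##Another value";

        // Approximates StringUtils.split(expected, "##"): every '#' is a
        // delimiter and runs of delimiters collapse, so the lone '#' splits too.
        String[] anyChar = expected.split("#+");

        // Approximates StringUtils.splitByWholeSeparator(expected, "##"):
        // only the full two-character sequence "##" separates values.
        String[] whole = expected.split("##");

        System.out.println(Arrays.toString(anyChar)); // 3 pieces -- the bug
        System.out.println(Arrays.toString(whole));   // 2 pieces -- intended
    }
}
```

<p>Swapping in <code>splitByWholeSeparator</code> gives the whole-separator behavior shown by the second split here.</p>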
<p>I could just change the line of code and move on but that breaks a ton (98) of tests. Before I did that, I wanted to ask what others thought. I see a few routes:</p>
<ol>
<li>Change the code to correctly split only on ## and not # <em>and</em> update all the tests I break</li>
<li>Do (1) <em>and</em> change the separator to something more future-proof. I suggest "&&" because that'd be invalid in XML outside a CDATA section.</li>
<li>Change how the expectations get tested so field expectations can have a one to many relationship. This'd take some time so I'd opt for (1) or (2) instead</li>
</ol>
<p>Any preferences out there?</p>
Infrastructure - Task #8754 (New): Add EML 2.2 to CN formats listhttps://redmine.dataone.org/issues/87542018-12-19T00:54:05ZBryce Mecummecum@nceas.ucsb.eduInfrastructure - Task #8753 (New): Add support for EML 2.2 (indexing, view)https://redmine.dataone.org/issues/87532018-12-19T00:43:52ZBryce Mecummecum@nceas.ucsb.edu
<p>EML 2.2 is nearly released and, with it, come changes to indexing and the view service. The key points are that EML now supports Markdown and semantic annotations, but some more stuff was added too. See the in-progress What's New in EML 2.2.0 page <a href="https://github.com/NCEAS/eml/blob/BRANCH_EML_2_2/docs/eml-220info.md">https://github.com/NCEAS/eml/blob/BRANCH_EML_2_2/docs/eml-220info.md</a> for details.</p>
<p>I'll track these with sub-tasks but here's an overview:</p>
<p><strong>Formats</strong></p>
<p>We need to expand the list of formats to include EML 2.2.</p>
<p><strong>Indexing</strong></p>
<p>EML 2.2 requires adding new fields and changes to some existing fields. For example, abstracts in EML can now also include a <code>markdown</code> child element which we'll also want to store in Solr. We also want to add in support for semantic annotations and probably structured funding information since that's been requested so much.</p>
<p>Support for semantic annotations will likely use the same approach as our previous work on semantic search where incoming annotations were subject to materialization during the indexing process. To explain what that means, the idea is that, when an annotation for term X comes in, the index processor loads the relevant ontology, materializes the superclass hierarchy for the term, and stores the entire hierarchy in Solr. For EML records with multiple annotations, only the unique set needs to be stored.</p>
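<p>To make the materialization idea concrete, here's a toy sketch. The ontology is represented as a plain map of term-to-direct-superclasses, and all the class names are invented; the real implementation would load the ontology itself:</p>

```java
import java.util.*;

public class Materialize {
    // Toy ontology: term -> direct superclasses (all names are made up).
    static final Map<String, List<String>> SUPER = Map.of(
            "Temperature", List.of("PhysicalProperty"),
            "Salinity", List.of("PhysicalProperty"),
            "PhysicalProperty", List.of("Property"));

    // Walk the superclass hierarchy and return the term plus all ancestors.
    static Set<String> materialize(String term) {
        Set<String> closure = new LinkedHashSet<>();
        Deque<String> stack = new ArrayDeque<>(List.of(term));
        while (!stack.isEmpty()) {
            String t = stack.pop();
            if (closure.add(t)) {
                stack.addAll(SUPER.getOrDefault(t, List.of()));
            }
        }
        return closure;
    }

    public static void main(String[] args) {
        // A record with multiple annotations: store only the unique set in Solr.
        Set<String> indexed = new LinkedHashSet<>();
        for (String annotation : List.of("Temperature", "Salinity")) {
            indexed.addAll(materialize(annotation));
        }
        System.out.println(indexed);
    }
}
```

<p>The point is just that a query for "PhysicalProperty" then matches records annotated with either "Temperature" or "Salinity" because the whole hierarchy is in the index.</p>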
<p><strong>View service</strong></p>
<p>As mentioned above, EML 2.2 changes how some existing elements (e.g., abstract) work and adds some new ones (e.g., annotations). The existing EML 2 stylesheets will work for EML 2.2 because 2.2 is backwards compatible, but we'll want to extend them to support what's new and changed in EML 2.2.</p>
<p>Of note: EML now supports storing Markdown where previously only EML TextType (basically DocBook) was allowed. Our plan on Metacat/MetacatUI to support this is to render the Markdown on the client side which does involve some security concerns (see <a href="https://github.com/NCEAS/metacatui/issues/860">https://github.com/NCEAS/metacatui/issues/860</a> for tracking).</p>
Infrastructure - Decision #8616 (New): Consider expanding isotc211's indexing component's keyword...https://redmine.dataone.org/issues/86162018-06-15T00:13:04ZBryce Mecummecum@nceas.ucsb.edu
<p>From </p>
<p><a href="https://repository.dataone.org/software/cicore/trunk/cn/d1_cn_index_processor/src/main/resources/application-context-isotc211-base.xml">https://repository.dataone.org/software/cicore/trunk/cn/d1_cn_index_processor/src/main/resources/application-context-isotc211-base.xml</a></p>
<p>The current XPath for the <code>keyword</code> field pulls out:</p>
<pre>//gmd:identificationInfo/gmd:MD_DataIdentification/gmd:descriptiveKeywords/gmd:MD_Keywords/gmd:keyword/gmx:Anchor/text() |
//gmd:identificationInfo/gmd:MD_DataIdentification/gmd:descriptiveKeywords/gmd:MD_Keywords/gmd:keyword/gco:CharacterString/text()
</pre>
<p>ISO also defines <code>MD_DataIdentification/gmd:topicCategory</code>, defined as "The main theme(s) of the dataset." and recommended when describing a dataset; it's conditional and repeatable. An example from a PANGAEA doc is</p>
<pre>...
<ns0:topicCategory>
<ns0:MD_TopicCategoryCode>geoscientificInformation</ns0:MD_TopicCategoryCode>
</ns0:topicCategory>
</ns0:MD_DataIdentification>
</pre>
<p>I think it'd improve recall to include it in our keywords list. It appears to be a controlled vocabulary so we could even make more direct use of it. The controlled vocabulary appears to be (from the MI_Metadata workbook):</p>
<p>Domain: <br>
- farming<br>
- biota<br>
- boundaries<br>
- climatologyMeteorologyAtmosphere<br>
- economy<br>
- elevation<br>
- environment<br>
- geoscientificInformation<br>
- health<br>
- imageryBaseMapsEarthCover<br>
- intelligenceMilitary<br>
- inlandWaters<br>
- location<br>
- oceans<br>
- planningCadastre<br>
- society<br>
- structure<br>
- transportation<br>
- utilitiesCommunication</p>
<p>Both NCEI and PANGAEA make use of this field in their ISO docs.</p>
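<p>A possible expansion (an untested sketch on my part) would add a branch for <code>topicCategory</code> to the keyword XPath above:</p>
<pre>//gmd:identificationInfo/gmd:MD_DataIdentification/gmd:topicCategory/gmd:MD_TopicCategoryCode/text()
</pre>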
Infrastructure - Bug #8215 (New): Consider how Subjects are compared (e.g. HTTP vs. HTTPS ORCID U...https://redmine.dataone.org/issues/82152017-11-07T01:41:51ZBryce Mecummecum@nceas.ucsb.edu
<p>I know this has come up a few times in the last few years and it has bitten us a lot in day-to-day operations at the Arctic Data Center. Some Subjects in DataONE authentication appear to be compared literally so these two subjects are not considered equivalent:</p>
<p><a href="http://orcid.org/0000-0002-0381-3766">http://orcid.org/0000-0002-0381-3766</a><br>
<a href="https://orcid.org/0000-0002-0381-3766">https://orcid.org/0000-0002-0381-3766</a></p>
<p>even though a reasonable observer would consider them equivalent.</p>
<p>Where this causes trouble seems to be when System Metadata is authored by users, specifically the rightsHolder and accessPolicy portions, and what's in the System Metadata does not literally match the user's actual Subject. As an example, when a user logs in via ORCID, their DataONE Subject ends up being the <em>http</em> variant of their ORCID URI, so if they use the <em>https</em> variant of their ORCID URI in their System Metadata, API calls requiring read, write, and changePermission permission fail for them because the literal string comparison determines the two Subjects to non-equivalent.</p>
<p>I think this may only affect ORCIDs right now because Subjects such as LDAP DNs may already be compared in a case-insensitive fashion, though I'm not sure on this point.</p>
<p>A few of us on the NCEAS dev team discussed this and we think it's fair to compare Subjects more intelligently, or at least be more aware of the semantic structure of the Subject string. This would have numerous benefits, such as:</p>
<ul>
<li>Prevent users from becoming hopelessly confused when they can't figure out why they can't read/write Objects (people often don't notice http vs https) which will make DataONE seem friendlier</li>
<li>Make DataONE authentication less dependent on protocols such as HTTP/HTTPS, both of which may change (i.e., ORCID may disable HTTPS; HTTP(S) may be replaced by a future web protocol), and thus more future-proof</li>
</ul>
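<p>As a sketch of what "more intelligent" comparison could mean, this normalizes the scheme of ORCID URIs before the equality check. To be clear, this is <em>not</em> the current DataONE behavior, and the helper names are mine:</p>

```java
public class SubjectCompare {
    // Hypothetical normalization: treat http/https ORCID URIs as equivalent
    // by rewriting to a single canonical scheme before comparing.
    static String normalizeOrcid(String subject) {
        if (subject.startsWith("http://orcid.org/")) {
            return "https://orcid.org/" + subject.substring("http://orcid.org/".length());
        }
        return subject;
    }

    static boolean subjectsEqual(String a, String b) {
        return normalizeOrcid(a).equals(normalizeOrcid(b));
    }

    public static void main(String[] args) {
        String fromToken = "http://orcid.org/0000-0002-0381-3766";   // Subject in auth token
        String fromSysmeta = "https://orcid.org/0000-0002-0381-3766"; // Subject in rightsHolder
        System.out.println(subjectsEqual(fromToken, fromSysmeta)); // true
    }
}
```

<p>Different ORCID iDs still compare as unequal; only the scheme is disregarded.</p>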
<p>This type of change would require changes in one or more software projects:</p>
<ul>
<li>DataONE Portal or libclient Java (I'm not entirely sure where this check is done right now)</li>
<li>Architecture documentation</li>
<li>(Potentially any/all) MN software stacks (At least a review would be needed)</li>
<li>(Potentially) Client tools (e.g., R, Python) (At least a review would be needed)</li>
</ul>
Infrastructure - Story #7859 (New): Add formatID for the STL 3d model file formathttps://redmine.dataone.org/issues/78592016-08-04T19:02:58ZBryce Mecummecum@nceas.ucsb.edu
<p>The STL file format is a domain-standard format for storing 3D models and the most common format I've seen used for 3D printing. Given that 3D printing is seeing increased usage in the sciences, I'd say this is a good candidate for inclusion in the controlled list of format IDs.</p>
<p>Type: DATA<br>
Id: STL<br>
Name: StereoLithography File Format<br>
Media type: application/sla (unofficial)<br>
Extension: .stl</p>
<p>There is an ASCII form and a binary form of this format. They don't seem to be distinguished by any standard. What do we do in this case?</p>
<p>References: <br>
- <a href="https://en.wikipedia.org/wiki/STL_(file_format)">https://en.wikipedia.org/wiki/STL_(file_format)</a><br>
- <a href="https://reference.wolfram.com/language/ref/format/STL.html">https://reference.wolfram.com/language/ref/format/STL.html</a></p>
DataONE API - Bug #7684 (New): Call to MNStorage.update() via REST API returns java.lang.StackOve...https://redmine.dataone.org/issues/76842016-03-21T23:07:39ZBryce Mecummecum@nceas.ucsb.edu
<p>I was trying to update an object via the REST API using cURL and forgot to enter the correct URL. The cURL command I used and the response are:</p>
<pre>$ curl -X PUT -H "Authorization: Bearer $TOKEN" \
    -F "pid=resourceMap_doi:10.5065/D6G44NFV" \
    -F "object=@object.xml" \
    -F "sysmeta=@sysmeta.xml" \
    -F "newPid=resourceMap_doi:10.5065/D6G44NFV_v3" \
    $URL
<?xml version="1.0" encoding="UTF-8"?>
java.lang.StackOverflowError
</pre>
<p>Where $URL was '<a href="https://arcticdata.io/metacat/d1/mn/v2/object">https://arcticdata.io/metacat/d1/mn/v2/object</a>' instead of '<a href="https://arcticdata.io/metacat/d1/mn/v2/object/resourceMap_doi:10.5065/D6G44NFV">https://arcticdata.io/metacat/d1/mn/v2/object/resourceMap_doi:10.5065/D6G44NFV</a>'</p>
<p>I expected to receive some sort of warning/error that I had forgotten to specify the URL properly for this call but instead saw a StackOverflowError.</p>
DataONE API - Bug #7578 (New): Fix 404 link to d1_instance_generator folder in documentationhttps://redmine.dataone.org/issues/75782016-01-08T22:01:20ZBryce Mecummecum@nceas.ucsb.edu
<p>In the MN API documentation for MNStorage.create (<a href="https://jenkins-ucsb-1.dataone.org/job/API%20Documentation%20-%20trunk/ws/api-documentation/build/html//apis/MN_APIs.html#MNStorage.create">https://jenkins-ucsb-1.dataone.org/job/API%20Documentation%20-%20trunk/ws/api-documentation/build/html//apis/MN_APIs.html#MNStorage.create</a>), I found that the following paragraph contains a broken link to d1_instance_generator:</p>
<blockquote>
<p>"The system metadata included with the create call must contain values for the elements required to be set by clients (see System Metadata). The system metadata document can be crafted by hand or preferably with a tool such as generate_sysmeta.py which is available in the d1_instance_generator Python package. See documentation included with that package for more information on its operation."</p>
</blockquote>
<p>The link to d1_instance_generator was to the SVN folder <a href="https://repository.dataone.org/software/cicore/trunk/d1_instance_generator">https://repository.dataone.org/software/cicore/trunk/d1_instance_generator</a> which is currently a 404. I think the folder moved to /d1_test_utilities_python/src/d1_test/instance_generator.</p>