diff --git a/README.md b/README.md index be4abbe..fe3ac1c 100644 --- a/README.md +++ b/README.md @@ -2,21 +2,37 @@ # Deconst documentation -Documentation for the *deconst* project: a continuous delivery pipeline for heterogenous documentation. It's also used to develop deconst itself, like some kind of documentation ouroboros. You can read the documentation itself at [deconst.horse](https://deconst.horse/). +Documentation for the *deconst* project: a continuous delivery +pipeline for heterogenous documentation. It's also used to develop +deconst itself, like some kind of documentation ouroboros. You can +read the documentation itself at +[deconst.horse](https://deconst.horse/). ## Building -To build this documentation standalone, use the [Deconst client](https://github.com/deconst/client). Use the Sphinx preparer and a clone of `https://github.com/deconst/deconst-docs-control` as the control repository. +To build this documentation standalone, use the [Deconst +client](https://github.com/deconst/client). Use the Sphinx preparer and a clone +of `https://github.com/deconst/deconst-docs-control` as the control repository. ### DNS and TLS -DNS entries for `deconst.horse`, `build.deconst.horse`, `staging.deconst.horse`, and `content.staging.deconst.horse` are managed by Cloud DNS entries in the "drgsites" account. They should be pointed to the appropriate load balancers. +DNS entries for `deconst.horse`, `build.deconst.horse`, `staging.deconst.horse`, +and `content.staging.deconst.horse` are managed by Cloud DNS entries in the +"drgsites" account. They should be pointed to the appropriate load balancers. -TLS certificates are currently retrieved from Let's Encrypt by a manual, downtime-inducing process. To reissue them: +TLS certificates are currently retrieved from Let's Encrypt by a manual, +downtime-inducing process. To reissue them: + +* Run `script/reissue` in your + [deconst/deploy](https://github.com/deconst/deploy) clone to reissue and + download the new certificates to `le_certificates/`. + +* Copy and paste the new certificate as the `ssl_key:` entry in + `credentials.yml`. + +* Use `script/encrypt` to encrypt the modified credentials file and commit and + push it to nexus-credentials. -* Run `script/reissue` in your [deconst/deploy](https://github.com/deconst/deploy) clone to reissue and download the new certificates to `le_certificates/`. -* Copy and paste the new certificate as the `ssl_key:` entry in `credentials.yml`. -* Use `script/encrypt` to encrypt the modified credentials file and commit and push it to nexus-credentials. * Invoke Ansible to roll it out: ```bash @@ -29,7 +45,8 @@ $ script/deploy \ # Deconst Dev Env in Kubernetes with Minikube -These instructions will prepare and submit the content and assets for this deconst documentation in a dev env in Kubernetes with Minikube. +These instructions will prepare and submit the content and assets for this +deconst documentation in a dev env in Kubernetes with Minikube. 1. If necessary, deploy the [presenter service](https://github.com/deconst/presenter#deconst-dev-env-in-kubernetes-with-minikube) @@ -49,7 +66,9 @@ These instructions will prepare and submit the content and assets for this decon 1. Submit the content - The `CONTENT_SERVICE_APIKEY` must match the `ADMIN_APIKEY` set when deploying the [content service](https://github.com/deconst/content-service#deconst-dev-env-in-kubernetes-with-minikube). 
+ The `CONTENT_SERVICE_APIKEY` must match the `ADMIN_APIKEY` set when + deploying the [content + service](https://github.com/deconst/content-service#deconst-dev-env-in-kubernetes-with-minikube). ```bash export CONTENT_SERVICE_URL=$(minikube service --url --namespace deconst content) @@ -63,7 +82,10 @@ These instructions will prepare and submit the content and assets for this decon 1. Prepare the staging content - The [staging environment](https://deconst.horse/developing/staging/) is a specially configured content service and presenter pair that allow users to preview content. Normally the Strider server will push preview content to the staging environment but this is how you would manually do it. + The [staging environment](https://deconst.horse/developing/staging/) is a + specially configured content service and presenter pair that allow users to + preview content. Normally the Strider server will push preview content to + the staging environment but this is how you would manually do it. ```bash export CONTENT_ID_BASE=https://github.com/staging/deconst/deconst-docs/ @@ -100,7 +122,9 @@ These instructions will prepare and submit the content and assets for this decon 1. Recreate the content DB - The content is stored in the mongo DB files at `/data/deconst/mongo` in the minikube VM. Delete the DB files and the mongo pod. Kubernetes will automatically restart the mongo pod. + The content is stored in the mongo DB files at `/data/deconst/mongo` in the + minikube VM. Delete the DB files and the mongo pod. Kubernetes will + automatically restart the mongo pod. ```bash minikube ssh @@ -110,4 +134,5 @@ These instructions will prepare and submit the content and assets for this decon kubectl delete po/$MONGO_POD_NAME --namespace deconst ``` - Now you can run through the instructions above again to prepare and submit the content. + Now you can run through the instructions above again to prepare and submit + the content. diff --git a/developing/architecture.rst b/developing/architecture.rst index 3b2556d..f4c39e2 100644 --- a/developing/architecture.rst +++ b/developing/architecture.rst @@ -1,33 +1,54 @@ Architecture ============ -Deconst is distributed as a set of Docker containers, deployed to a cluster of CoreOS hosts by an Ansible playbook. Containers are organized into sets of linked services called "pods." Deconst can be scaled by both launching additional worker hosts and by starting a greater number of pods on each host. +Deconst is distributed as a set of Docker containers, deployed to a cluster of +CoreOS hosts by an Ansible playbook. Containers are organized into sets of +linked services called "pods." Deconst can be scaled by both launching +additional worker hosts and by starting a greater number of pods on each host. .. note:: - The name "pod" is taken from Kubernetes, but I'm using it to mean something slightly different here. A Kubernetes pod is also a set of related containers, but they share networking and cgroup attributes, which our pods do not do. The containers within a Deconst pod are only related by regular Docker network links. + The name "pod" is taken from Kubernetes, but I'm using it to mean something + slightly different here. A Kubernetes pod is also a set of related containers, + but they share networking and cgroup attributes, which our pods do not do. The + containers within a Deconst pod are only related by regular Docker network + links. - I'm not changing it now because there's a good chance we *will* be on Kubernetes at some point. 
+ I'm not changing it now because there's a good chance we *will* be on + Kubernetes at some point. This is how the world interacts with a Deconst cluster: .. image:: /_images/deconst-external.png -None of the service containers store any internal, persistent state: the sources of truth for all Deconst state are Cloud Files containers, MongoDB collections, or GitHub repositories. This means that you can adaptively destroy or launch Deconst worker hosts without fear of losing information. +None of the service containers store any internal, persistent state: the sources +of truth for all Deconst state are Cloud Files containers, MongoDB collections, +or GitHub repositories. This means that you can adaptively destroy or launch +Deconst worker hosts without fear of losing information. Each pod includes the following arrangement of interlinked service containers: .. image:: /_images/deconst-internal.png -On the build host, a dedicated `Strider CD `_ continuous integration server manages cluster-internal and automatically created builds. It also includes a set of service containers that act as a :ref:`staging environment ` that can be used to preview content in context before it's merged and shipped. +On the build host, a dedicated `Strider CD +`_ continuous integration server manages +cluster-internal and automatically created builds. It also includes a set of +service containers that act as a :ref:`staging environment ` that can +be used to preview content in context before it's merged and shipped. .. image:: /_images/deconst-build.png -Access to Strider is managed by membership in a GitHub organization or in teams within an organization, as configured in the instance's credentials file. +Access to Strider is managed by membership in a GitHub organization or in teams +within an organization, as configured in the instance's credentials file. -Strider is prepopulated with a build for the instance's control repository that preprocesses and submits site-wide assets to the content service, and automatically creates new content builds based on a list in a configuration file. +Strider is prepopulated with a build for the instance's control repository that +preprocesses and submits site-wide assets to the content service, and +automatically creates new content builds based on a list in a configuration +file. -The asset preparer, content preparer, and submitter processes are run in isolated Docker containers, sharing a workspace with Strider by a data volume container. +The asset preparer, content preparer, and submitter processes are run in +isolated Docker containers, sharing a workspace with Strider by a data volume +container. Components ---------- @@ -35,62 +56,124 @@ Components .. glossary:: preparer - Process responsible for converting a :term:`content repository` into a directory tree of - :term:`metadata envelopes`, each of which contains one page of rendered HTML and associated - metadata. + Process responsible for converting a :term:`content repository` into a + directory tree of :term:`metadata envelopes`, each of which contains one + page of rendered HTML and associated metadata. - There is one preparer for each supported format of :term:`content repository`; current, - Sphinx and Jekyll. The preparer is executed by a CI/CD system on each commit to the - repository. + There is one preparer for each supported format of :term:`content + repository`; current, Sphinx and Jekyll. The preparer is executed by a CI/CD + system on each commit to the repository. 
submitter - Process responsible for traversing directories populated with :term:`metadata envelopes` and asset files and submitting them to the :term:`content service`. The submitter submits content and assets in bulk transactions and avoids submitting unchanged content. + Process responsible for traversing directories populated with + :term:`metadata envelopes` and asset files and submitting them to the + :term:`content service`. The submitter submits content and assets in bulk + transactions and avoids submitting unchanged content. content service - Service that accepts submissions and queries for the most recent :term:`metadata envelope` - associated with a specific :term:`content ID`. + Service that accepts submissions and queries for the most recent + :term:`metadata envelope` associated with a specific :term:`content ID`. presenter - Accepts HTTP requests from users. Maps the requested :term:`presented URL` to a :term:`content ID` using the latest known version of the content mapping within the control repository, then accesses the requested :term:`metadata envelope` using the :term:`content service`. Injects the envelope into an appropriate :term:`template` and send the final HTML back in an HTTP response. + Accepts HTTP requests from users. Maps the requested :term:`presented URL` + to a :term:`content ID` using the latest known version of the content + mapping within the control repository, then accesses the requested + :term:`metadata envelope` using the :term:`content service`. Injects the + envelope into an appropriate :term:`template` and send the final HTML back + in an HTTP response. nginx - Reverse proxy that accepts requests from off of the host, terminates TLS, and delegates to the local :term:`presenter` and :term:`content service`. + Reverse proxy that accepts requests from off of the host, terminates TLS, + and delegates to the local :term:`presenter` and :term:`content service`. strider - A continuous integration server integrated with Deconst to provide on-cluster preparer and submitter runs. + A continuous integration server integrated with Deconst to provide + on-cluster preparer and submitter runs. Lifecycle of an HTTP Request ---------------------------- When a content consumer initiates an HTTPS request: -#. The Cloud Load Balancer proxies the request to one of the registered :term:`nginx` containers. -#. :term:`nginx` terminates TLS and, in turn, proxies the request to its linked :term:`presenter`. -#. The :term:`presenter` queries its content map with the :term:`presented URL` to discover the :term:`content ID` of the content that should be rendered at that path. -#. Next, the presenter queries the :term:`content service` to acquire the content for that ID. The content service locates the appropriate :term:`metadata envelope`, all site-wide assets, and performs any necessary post-processing. -#. If any :term:`addenda` are requested by the current envelope, each addenda envelope is fetched from the content service. -#. The presenter locates the Nunjucks :term:`template` that should be used to decorate the raw content based on a regular expression match on the presented URL. If no template is routed, this request is skipped and a null layout (that renders the envelope's body directly) is used. -#. The presenter renders the metadata envelope using the layout. The resulting HTML document is returned to the user. +#. The Cloud Load Balancer proxies the request to one of the registered + :term:`nginx` containers. + +#. 
:term:`nginx` terminates TLS and, in turn, proxies the request to its linked + :term:`presenter`. + +#. The :term:`presenter` queries its content map with the :term:`presented URL` + to discover the :term:`content ID` of the content that should be rendered at + that path. + +#. Next, the presenter queries the :term:`content service` to acquire the + content for that ID. The content service locates the appropriate :term:`metadata + envelope`, all site-wide assets, and performs any necessary post-processing. + +#. If any :term:`addenda` are requested by the current envelope, each addenda + envelope is fetched from the content service. + +#. The presenter locates the Nunjucks :term:`template` that should be used to + decorate the raw content based on a regular expression match on the presented + URL. If no template is routed, this request is skipped and a null layout (that + renders the envelope's body directly) is used. + +#. The presenter renders the metadata envelope using the layout. The resulting + HTML document is returned to the user. + Lifecycle of a Control Repository Update ---------------------------------------- When a change is merged into the live branch of the :term:`control repository`: -#. A Strider build executes the asset :term:`preparer` on the latest commit of the repository. Stylesheets, javascript, images, and fonts found within the ``assets`` directory are compiled, concatenated, minified, and submitted to the :term:`content service` to be fingerprinted, stored on the CDN-enabled asset container, and made available as global assets to all metadata envelopes. -#. Once all assets have been published, the preparer sends the latest git commit SHA of the control repository to the :term:`content service`, where it's stored in MongoDB. -#. Each entry within the ``content-repositories.json`` file is checked against the list of :term:`strider` builds. If any new entries have been added, a content build is created and configured with a newly issued API key. -#. During each request, each :term:`presenter` queries its linked :term:`content service` for the active control repository SHA. If it doesn't match last-loaded control repository SHA, the presenter triggers an asynchronous update. -#. If successful, the new content and template mappings, redirects, and templates are atomically installed. Otherwise, the presenter logs an error with the details and waits for further changes before attempting to reload. +#. A Strider build executes the asset :term:`preparer` on the latest commit of + the repository. Stylesheets, javascript, images, and fonts found within the + ``assets`` directory are compiled, concatenated, minified, and submitted to the + :term:`content service` to be fingerprinted, stored on the CDN-enabled asset + container, and made available as global assets to all metadata envelopes. + +#. Once all assets have been published, the preparer sends the latest git commit + SHA of the control repository to the :term:`content service`, where it's stored + in MongoDB. + +#. Each entry within the ``content-repositories.json`` file is checked against + the list of :term:`strider` builds. If any new entries have been added, a + content build is created and configured with a newly issued API key. + +#. During each request, each :term:`presenter` queries its linked :term:`content + service` for the active control repository SHA. If it doesn't match last-loaded + control repository SHA, the presenter triggers an asynchronous update. + +#. 
If successful, the new content and template mappings, redirects, and + templates are atomically installed. Otherwise, the presenter logs an error with + the details and waits for further changes before attempting to reload. Lifecycle of a Content Repository Update ---------------------------------------- When a change is merged into the live branch of a :term:`content repository`: -#. A Strider build scans the latest commit of the repository for directories containing ``_deconst.json`` files and executes the appropriate :term:`preparer` within a Docker container that's given each context. -#. The preparer copies each referenced asset to an asset output directory within the shared workspace container. The offset of the asset reference is saved in an "asset_offsets" map. -#. The preparer generates a :term:`metadata envelope` for each page that would be rendered, assigns it a :term:`content ID` using a configured base ID, and writes it to the envelope output directory. -#. The submitter queries the :term:`content service` with the SHA-256 fingerprints of each asset in the asset directory. If any assets are missing or have changed, the submitter bulk-uploads them to the :term:`content service` API. If more than 30MB of assets need to be uploaded, assets are uploaded in batches of just over 30MB to avoid overwhelming the upload process. -#. The submitter inserts the public CDN URLs of each asset into the body of each metadata envelope at the recorded offsets and removes the "asset_offsets" key. -#. The submitter queries the content service with the SHA-256 fingerprint of a stable (key-sorted) representation of each envelope. Any envelopes that have been changed are bulk-uploaded to the content service. +#. A Strider build scans the latest commit of the repository for directories + containing ``_deconst.json`` files and executes the appropriate :term:`preparer` + within a Docker container that's given each context. + +#. The preparer copies each referenced asset to an asset output directory within + the shared workspace container. The offset of the asset reference is saved in an + "asset_offsets" map. + +#. The preparer generates a :term:`metadata envelope` for each page that would + be rendered, assigns it a :term:`content ID` using a configured base ID, and + writes it to the envelope output directory. + +#. The submitter queries the :term:`content service` with the SHA-256 + fingerprints of each asset in the asset directory. If any assets are missing or + have changed, the submitter bulk-uploads them to the :term:`content service` + API. If more than 30MB of assets need to be uploaded, assets are uploaded in + batches of just over 30MB to avoid overwhelming the upload process. + +#. The submitter inserts the public CDN URLs of each asset into the body of each + metadata envelope at the recorded offsets and removes the "asset_offsets" key. + +#. The submitter queries the content service with the SHA-256 fingerprint of a + stable (key-sorted) representation of each envelope. Any envelopes that have + been changed are bulk-uploaded to the content service. diff --git a/developing/envelope.rst b/developing/envelope.rst index cbfa61b..7e21cb1 100644 --- a/developing/envelope.rst +++ b/developing/envelope.rst @@ -3,9 +3,13 @@ Metadata Envelope Schema ======================== -Much of the deconst system involves the manipulation of :term:`metadata envelopes`, the JSON documents produced by each :term:`preparer` that contain the actual content to render. 
To be presented properly, envelopes must adhere to a common schema. +Much of the deconst system involves the manipulation of :term:`metadata +envelopes`, the JSON documents produced by each :term:`preparer` that contain +the actual content to render. To be presented properly, envelopes must adhere to +a common schema. -This is an example envelope that demonstrates the full document structure, including all optional fields: +This is an example envelope that demonstrates the full document structure, +including all optional fields: .. code-block:: json @@ -46,7 +50,8 @@ This is an example envelope that demonstrates the full document structure, inclu .. glossary:: body - The only required field for a valid envelope. It contains the pre-rendered HTML of the page. + The only required field for a valid envelope. It contains the pre-rendered + HTML of the page. title The page title or blog post name used for this document. @@ -55,7 +60,8 @@ This is an example envelope that demonstrates the full document structure, inclu The table of contents for this page as a fragment of rendered HTML. content_type - If specified, set the Content-Type of the response containing this document. Defaults to text/html; charset=utf-8. + If specified, set the Content-Type of the response containing this document. + Defaults to text/html; charset=utf-8. author Name of the author who wrote this content. @@ -64,22 +70,29 @@ This is an example envelope that demonstrates the full document structure, inclu A brief paragraph describing the :term:`author`. publish_date - Approximate timestamp on which this piece of content was published, formatted as an RFC2822 string. + Approximate timestamp on which this piece of content was published, + formatted as an RFC2822 string. tags - An array of content classification strings that may be normalized or supplemented with machine-generated information. + An array of content classification strings that may be normalized or + supplemented with machine-generated information. categories - An array of content classification strings that are explicitly user-provided and chosen from a list fixed in the control repository. + An array of content classification strings that are explicitly user-provided + and chosen from a list fixed in the control repository. keywords An array of terms to supplement full-text search indexing. unsearchable - If present and set to ``true``, this envelope will be excluded from the full-text search index. Use this for content that hasn't been :ref:`mapped ` yet or documents like RSS feeds, ``robots.txt`` files, and other site metadata. + If present and set to ``true``, this envelope will be excluded from the + full-text search index. Use this for content that hasn't been :ref:`mapped + ` yet or documents like RSS feeds, ``robots.txt`` files, and + other site metadata. disqus - An object that controls the inclusion of Disqus comments on the current page. If present, must be an object with the following structure: + An object that controls the inclusion of Disqus comments on the current + page. If present, must be an object with the following structure: .. code-block:: json @@ -89,11 +102,16 @@ This is an example envelope that demonstrates the full document structure, inclu "embed": true } - **include** toggles the inclusion of any Disqus content at all. **short_name** is used to link to a specific Disqus account. 
**embed** toggles the included script between an *embedding script* that injects a Disqus comment form on this page and a *count script* that decorates links with a comment count. + **include** toggles the inclusion of any Disqus content at all. + **short_name** is used to link to a specific Disqus account. **embed** + toggles the included script between an *embedding script* that injects a + Disqus comment form on this page and a *count script* that decorates links + with a comment count. next previous - These objects, if included, provide navigational links to adjacent documents in a sequence. If present, must be an object with the following structure: + These objects, if included, provide navigational links to adjacent documents + in a sequence. If present, must be an object with the following structure: .. code-block:: json @@ -102,15 +120,26 @@ This is an example envelope that demonstrates the full document structure, inclu "url": "../next-page" } - If the ``url`` key is absolute (rooted at the document root, like ``/blog/other-post``), the presenter will re-root it based on the current mapping of the content repository. If it's relative, it will be left as-is. + If the ``url`` key is absolute (rooted at the document root, like + ``/blog/other-post``), the presenter will re-root it based on the current + mapping of the content repository. If it's relative, it will be left as-is. addenda - Cross-references to related documents that should be fetched along with this envelope to be made available to the template. Each document's envelope is available as ``deconst.addenda..envelope``. Most likely, the attribute you want is ``deconst.addenda..envelope.body``. + Cross-references to related documents that should be fetched along with this + envelope to be made available to the template. Each document's envelope is + available as ``deconst.addenda..envelope``. Most likely, the attribute + you want is ``deconst.addenda..envelope.body``. asset_offsets - This key must only be present in the intermediate representation used to communicate between a preparer and the submitter. Its keys are local paths to asset files relative to the asset directory. Each value is an array of character offsets into ``body`` that should be replaced by the full, public URL of the asset. - -The documents retrieved from the content store consist of the requested envelope and a number of additional attributes that are derived and injected at retrieval time. The full content document looks like this: + This key must only be present in the intermediate representation used to + communicate between a preparer and the submitter. Its keys are local paths + to asset files relative to the asset directory. Each value is an array of + character offsets into ``body`` that should be replaced by the full, public + URL of the asset. + +The documents retrieved from the content store consist of the requested envelope +and a number of additional attributes that are derived and injected at retrieval +time. The full content document looks like this: .. code-block:: json diff --git a/developing/index.rst b/developing/index.rst index 167a1da..ad64169 100644 --- a/developing/index.rst +++ b/developing/index.rst @@ -1,7 +1,10 @@ Developing Deconst ================== -If you'd like to start contributing to Deconst, welcome! The resources found here should help you set up your development environment, understand how the parts of the system work together, and decipher us when we talk about things like "content IDs" and "presented URLs". 
+If you'd like to start contributing to Deconst, welcome! The resources found +here should help you set up your development environment, understand how the +parts of the system work together, and decipher us when we talk about things +like "content IDs" and "presented URLs". .. toctree:: diff --git a/developing/preparer.rst b/developing/preparer.rst index 3cb72e7..fbdcb4c 100644 --- a/developing/preparer.rst +++ b/developing/preparer.rst @@ -3,27 +3,67 @@ Writing a Preparer ================== -If you want to include content from a new :term:`content repository` format, you'll need to create a new :term:`preparer`. Generally, a preparer needs to: +If you want to include content from a new :term:`content repository` format, +you'll need to create a new :term:`preparer`. Generally, a preparer needs to: -#. Parse the markup language, configuration files, and other metadata for some content format. When possible, you should use the format's native libraries and tooling to do so. -#. Parse the ``_deconst.json`` file. Consult the :ref:`new content repository section ` for its schema. -#. Copy assets (usually images) to the directory specified by the environment variable ``ASSET_DIR``. It's best to preserve as much of the local directory structure as possible from the source repository, unless two assets in different subdirectories have the same filename. -#. Use the markup to produce rendered HTML. The preparer should use a single-character placeholder for each asset URL. As it does so, it should generate a map that associates the path of each asset relative to ``ASSET_DIR`` to a collection of character offsets within the body text at which that asset is referenced. +#. Parse the markup language, configuration files, and other metadata for some + content format. When possible, you should use the format's native libraries and + tooling to do so. - As a rule, the rendered HTML *should omit any layouts* from the content repository itself and only render the page content, unadorned. In Deconst, templates will be applied :ref:`later, from the control repository `. This is important to ensure a consistent look and feel across many content repositories published to the same site, as well as allowing users to take advantage of presenter-implemented features like :ref:`search `. +#. Parse the ``_deconst.json`` file. Consult the :ref:`new content repository + section ` for its schema. -#. Assemble the content into one or more :term:`metadata envelopes` that match the :ref:`envelope schema `. If any assets were referenced, include the asset offset map as the ``asset_offsets`` element. Write each completed envelope to the directory specified by the environment variable ``ENVELOPE_DIR`` as a file with the filename pattern ``.json``. +#. Copy assets (usually images) to the directory specified by the environment + variable ``ASSET_DIR``. It's best to preserve as much of the local directory + structure as possible from the source repository, unless two assets in different + subdirectories have the same filename. + +#. Use the markup to produce rendered HTML. The preparer should use a + single-character placeholder for each asset URL. As it does so, it should + generate a map that associates the path of each asset relative to ``ASSET_DIR`` + to a collection of character offsets within the body text at which that asset is + referenced. + + As a rule, the rendered HTML *should omit any layouts* from the content + repository itself and only render the page content, unadorned. 
In Deconst, + templates will be applied :ref:`later, from the control repository + `. This is important to ensure a consistent look and feel + across many content repositories published to the same site, as well as + allowing users to take advantage of presenter-implemented features like + :ref:`search `. + +#. Assemble the content into one or more :term:`metadata envelopes` that match + the :ref:`envelope schema `. If any assets were referenced, + include the asset offset map as the ``asset_offsets`` element. Write each + completed envelope to the directory specified by the environment variable + ``ENVELOPE_DIR`` as a file with the filename pattern ``.json``. Docker Container Protocol ------------------------- -If you run your preparer in an independent environment (like a non-Deconst continuous integration server), anything that implements the process above will work fine. If you want your preparer to work within the Deconst client or to be available to :ref:`automatically created Strider builds `, you need to package your preparer in a Docker container image that obeys the container protocol described here. +If you run your preparer in an independent environment (like a non-Deconst +continuous integration server), anything that implements the process above will +work fine. If you want your preparer to work within the Deconst client or to be +available to :ref:`automatically created Strider builds +`, you need to package your preparer in a Docker +container image that obeys the container protocol described here. Deconst preparer containers should respect the following configuration values: * ``ASSET_DIR``: The preparer must copy assets to this directory tree. -* ``ENVELOPE_DIR``: The preparer must write completed envelopes to this directory. -* ``CONTENT_ID_BASE``: *(optional)* If set, this should *override* the content ID base specified in ``_deconst.json`` for this preparation run, preferably with some kind of message if they differ. -* ``CONTENT_ROOT``: *(optional)* If specified, the preparer should prepare content mounted to a volume at this path within the container. Otherwise, it should default to preparing ``/usr/content-repo``. -When run with no arguments, the preparer container should prepare the content as described above, then exit with an exit status of 0 if preparation was successful, or nonzero if it was not. +* ``ENVELOPE_DIR``: The preparer must write completed envelopes to this + directory. + +* ``CONTENT_ID_BASE``: *(optional)* If set, this should *override* the content + ID base specified in ``_deconst.json`` for this preparation run, preferably with + some kind of message if they differ. + +* ``CONTENT_ROOT``: *(optional)* If specified, the preparer should prepare + content mounted to a volume at this path within the container. Otherwise, it + should default to preparing ``/usr/content-repo``. + +When run with no arguments, the preparer container should prepare the content as +described above, then exit with an exit status of 0 if preparation was +successful, or nonzero if it was not. diff --git a/developing/setup.rst b/developing/setup.rst index 8675774..243fe3c 100644 --- a/developing/setup.rst +++ b/developing/setup.rst @@ -1,16 +1,24 @@ Development Environment ======================= -Before you can contribute to Deconst development, you'll first need to prepare your local development machine with a few dependencies. We use OSX, but any operating system that can run Docker containers should be usable. 
+Before you can contribute to Deconst development, you'll first need to prepare +your local development machine with a few dependencies. We use OSX, but any +operating system that can run Docker containers should be usable. Prerequisities -------------- -Deconst packages its services and dependencies as Docker containers. This helps to minimize the number of `yaks that you need to shave `_ to get started, but you still need to shave the yak for Docker itself. +Deconst packages its services and dependencies as Docker containers. This helps +to minimize the number of `yaks that you need to shave +`_ to get started, but you still need +to shave the yak for Docker itself. -#. Follow the `installation guide for Docker `_ for your platform of choice. +#. Follow the `installation guide for Docker + `_ for your platform of + choice. - You'll know you're ready to continue when you can successfully execute the following from a terminal: + You'll know you're ready to continue when you can successfully execute the + following from a terminal: .. code-block:: bash @@ -26,75 +34,108 @@ Deconst packages its services and dependencies as Docker containers. This helps Git commit (server): 7c8fca2 OS/Arch (server): linux/amd64 -#. We also use Docker Compose to orchestrate small numbers of local containers to make development more convenient. Follow the `installation guide for Docker Compose `_. +#. We also use Docker Compose to orchestrate small numbers of local containers + to make development more convenient. Follow the `installation guide for Docker + Compose `_. -#. To contribute, you'll also need a reasonable `git `_ client. It's likely that you already have one: open a terminal and type ``git version`` to check. +#. To contribute, you'll also need a reasonable `git `_ + client. It's likely that you already have one: open a terminal and type ``git + version`` to check. Individual Service Development ------------------------------ -Each Deconst service is developed in a separate GitHub repository within the `deconst organization `_. Fork and clone the repository of the service you wish to contribute to. +Each Deconst service is developed in a separate GitHub repository within the +`deconst organization `_. Fork and clone the +repository of the service you wish to contribute to. -To run the service, run the following from the top-level directory of your clone: +To run the service, run the following from the top-level directory of your +clone: .. code-block:: bash - docker-compose up + docker-compose up -Compose will launch a container for the service you're focusing on right now, as well as any upstream services or infrastructure that it depends on, and link them all together correctly. You'll see the combined logs for all containers on your terminal. As you edit source code in your editor of choice, the service within the container will automatically reload with your changes, so you can explore the effects live. +Compose will launch a container for the service you're focusing on right now, as +well as any upstream services or infrastructure that it depends on, and link +them all together correctly. You'll see the combined logs for all containers on +your terminal. As you edit source code in your editor of choice, the service +within the container will automatically reload with your changes, so you can +explore the effects live. .. note:: - Unless you're developing on Linux, which can run Docker containers natively, it's likely that the Docker containers you run actually live within a virtual machine. 
As a consequence, you won't be able to reach your services at "localhost", but rather some other IP. The exact IP depends on the way you installed docker. For example, if you're using ``docker-machine``, running ``docker-machine ip dev`` will show you the IP. - -Although your local source changes will take effect immediately, you may need to periodically fetch newer versions of upstream containers, as development progresses on the other parts of the system. To ensure that you have the latest builds of each container, run ``docker-compose pull``. Also, if you need to change the service's dependencies, you may need to rebuild your working container with ``docker-compose build``. - -Compose can also be used to launch its containers in the background (with ``docker-compose up -d``), explore logs for individual containers rather than aggregated, or run one-off processes in the context of any service container. Consult the `compose documentation `_ to see all of your options. - -Each service's unit tests can also be executed within a Docker container for convenience. As a convention, the following script will launch the container and run all tests: + Unless you're developing on Linux, which can run Docker containers natively, + it's likely that the Docker containers you run actually live within a virtual + machine. As a consequence, you won't be able to reach your services at + "localhost", but rather some other IP. The exact IP depends on the way you + installed docker. For example, if you're using ``docker-machine``, running + ``docker-machine ip dev`` will show you the IP. + +Although your local source changes will take effect immediately, you may need to +periodically fetch newer versions of upstream containers, as development +progresses on the other parts of the system. To ensure that you have the latest +builds of each container, run ``docker-compose pull``. Also, if you need to +change the service's dependencies, you may need to rebuild your working +container with ``docker-compose build``. + +Compose can also be used to launch its containers in the background (with +``docker-compose up -d``), explore logs for individual containers rather than +aggregated, or run one-off processes in the context of any service container. +Consult the `compose documentation `_ to +see all of your options. + +Each service's unit tests can also be executed within a Docker container for +convenience. As a convention, the following script will launch the container and +run all tests: .. code-block:: bash - script/test + script/test Integration Testing ------------------- -To verify that the entire Deconst system works together, use the **integrated** repository. "Integrated" contains a compose file that executes a single "pod" of related deconst services on your local host, so you can test all of the services together. +To verify that the entire Deconst system works together, use the **integrated** +repository. "Integrated" contains a compose file that executes a single "pod" of +related deconst services on your local host, so you can test all of the services +together. Clone the deconst/integrated repository and run ``script/up`` to begin: .. 
code-block:: bash - git clone https://github.com/deconst/integrated.git deconst-integrated - cd deconst-integrated + git clone https://github.com/deconst/integrated.git deconst-integrated + cd deconst-integrated - # Customize your environment - cp env.example env - ${EDITOR} env + # Customize your environment + cp env.example env + ${EDITOR} env - # Launch all services - script/up + # Launch all services + script/up -While your services are alive, you can run ``script/add-sphinx``, ``script/add-jekyll``, and ``script/add-assets`` to invoke an appropriate :term:`preparer` and submit content to your local deconst system. +While your services are alive, you can run ``script/add-sphinx``, +``script/add-jekyll``, and ``script/add-assets`` to invoke an appropriate +:term:`preparer` and submit content to your local deconst system. Alternative: Manual Setup of Development Environment ---------------------------------------------------- .. code-block:: bash - # generate an API key for the content service - APIKEY=$(hexdump -v -e '1/1 "%.2x"' -n 128 /dev/random) - echo "Content Service Admin API Key:" $APIKEY + # generate an API key for the content service + APIKEY=$(hexdump -v -e '1/1 "%.2x"' -n 128 /dev/random) + echo "Content Service Admin API Key:" $APIKEY - # startup content service dependencies - docker run -d --name elasticsearch elasticsearch:1.7 - docker run -d --name mongo mongo:2.6 + # startup content service dependencies + docker run -d --name elasticsearch elasticsearch:1.7 + docker run -d --name mongo mongo:2.6 - # build and deploy the content service - cd {wherever you have the deconst/content-service} - docker build --tag content-service:1.0.0 . - docker run -d -p 9000:8080 \ + # build and deploy the content service + cd {wherever you have the deconst/content-service} + docker build --tag content-service:1.0.0 . + docker run -d -p 9000:8080 \ -e NODE_ENV=development \ -e STORAGE=memory \ -e MONGODB_URL=mongodb://mongo:27017/content \ @@ -105,10 +146,10 @@ Alternative: Manual Setup of Development Environment --name content \ content-service:1.0.0 script/inside/dev - # build and deploy the presenter service - cd {wherever you have the deconst/presenter} - docker build --tag presenter-service:1.0.0 . - docker run -d -p 80:8080 \ + # build and deploy the presenter service + cd {wherever you have the deconst/presenter} + docker build --tag presenter-service:1.0.0 . + docker run -d -p 80:8080 \ -e NODE_ENV=development \ -e CONTROL_REPO_PATH=/var/control-repo \ -e CONTROL_REPO_URL=https://github.com/j12y/nexus-control.git \ diff --git a/developing/staging.rst b/developing/staging.rst index 92a9032..b441a8f 100644 --- a/developing/staging.rst +++ b/developing/staging.rst @@ -3,25 +3,49 @@ Staging Environment =================== -The staging environment is a specially configured :term:`content service` and :term:`presenter` pair that allow users to preview content, in context with the rest of the hosted site, before it's live and shown to end users. +The staging environment is a specially configured :term:`content service` and +:term:`presenter` pair that allow users to preview content, in context with the +rest of the hosted site, before it's live and shown to end users. -A specific build of previewed content is identified by a :term:`revision ID`. This gives each build a unique environment and enables the staging environment to host many revisions independently of one another. +A specific build of previewed content is identified by a :term:`revision ID`. 
+This gives each build a unique environment and enables the staging environment +to host many revisions independently of one another. Submitting Content to the Staging Environment --------------------------------------------- -To submit content to the staging :term:`content service`, run the normal :term:`preparer`, but: +To submit content to the staging :term:`content service`, run the normal +:term:`preparer`, but: #. Set ``CONTENT_SERVICE_URL`` to the staging environment's API endpoint. -#. Prepend a revision ID as the first URL path segment of your :term:`content ID` base. For example, if your content repository's normal content ID base is ``https://github.com/deconst/deconst-docs/``, set ``CONTENT_ID_BASE`` to ``https://github.com/build-abcdef/deconst/deconst-docs/`` instead. -The revision ID is arbitrary, but it should be chosen to be relatively unique among everyone who's staging changes so you don't overwrite one another's staged content by mistake. Good examples include something containing part of your current git SHA, your username, or a timestamp of some kind. +#. Prepend a revision ID as the first URL path segment of your :term:`content + ID` base. For example, if your content repository's normal content ID base is + ``https://github.com/deconst/deconst-docs/``, set ``CONTENT_ID_BASE`` to + ``https://github.com/build-abcdef/deconst/deconst-docs/`` instead. -To amend an existing revision's content, re-run the preparer with the same revision ID. To append content from a different content repository to the same staging environment, run its preparer with the revision ID. +The revision ID is arbitrary, but it should be chosen to be relatively unique +among everyone who's staging changes so you don't overwrite one another's staged +content by mistake. Good examples include something containing part of your +current git SHA, your username, or a timestamp of some kind. + +To amend an existing revision's content, re-run the preparer with the same +revision ID. To append content from a different content repository to the same +staging environment, run its preparer with the revision ID. Viewing Staged Content ---------------------- -To see the content that you've just staged, visit the staging :term:`presenter`'s address and prepend your revision ID to the URL path. For example, if you just built content that's normally mapped to the path ``/docs/`` to a staging server that's available at ``https://staging.example.com/`` with the revision id ``user-smashwilson``, your staged content will be visible at ``https://staging.example.com/user-smashwilson/docs/``. - -The rest of the site will be *also* be visible beneath the parent ``/user-smashwilson/`` path exactly as it appears on the current production site. Any links on any rendered page will be manipulated such that they will point to the equivalent content within the same revision ID. This means that you can click around the staging environment, using site navigation normally, without accidentally jumping to the production endpoint instead. +To see the content that you've just staged, visit the staging +:term:`presenter`'s address and prepend your revision ID to the URL path. For +example, if you just built content that's normally mapped to the path ``/docs/`` +to a staging server that's available at ``https://staging.example.com/`` with +the revision id ``user-smashwilson``, your staged content will be visible at +``https://staging.example.com/user-smashwilson/docs/``. 
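As a concrete sketch of the staging workflow described above, a manual run might look like the following from a shell; the staging endpoint and revision ID here are hypothetical placeholders, not real values:

.. code-block:: bash

   # Point the preparer/submitter at the staging content service
   # (hypothetical endpoint; substitute your staging API URL).
   export CONTENT_SERVICE_URL=https://content.staging.example.com/

   # Prepend the revision ID to the repository's normal content ID base.
   export CONTENT_ID_BASE=https://github.com/user-smashwilson/deconst/deconst-docs/

   # Run the preparer and submitter for the repository as usual, then browse
   # the staged result at https://staging.example.com/user-smashwilson/docs/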
+
+The rest of the site will *also* be visible beneath the parent
+``/user-smashwilson/`` path exactly as it appears on the current production
+site. Any links on any rendered page will be manipulated such that they will
+point to the equivalent content within the same revision ID. This means that you
+can click around the staging environment, using site navigation normally,
+without accidentally jumping to the production endpoint instead.
diff --git a/developing/terminology.rst b/developing/terminology.rst
index 184aa73..4bb0d7c 100644
--- a/developing/terminology.rst
+++ b/developing/terminology.rst
@@ -1,64 +1,83 @@
 Terminology
 ===========

-It's important to have a shared vocabulary when talking about complicated software systems. We generally try to consistently use these terms in our code, comments, issues and chat.
+It's important to have a shared vocabulary when talking about complicated
+software systems. We generally try to consistently use these terms in our code,
+comments, issues and chat.

 .. glossary::

-  content repository
-  content repositories
-    Location containing material to be included as a subset of the completed site. Often, the
-    material within will be written in a friendlier human-editable markup format such as
-    reStructuredText or Markdown.
+  content repository
+  content repositories
+    Location containing material to be included as a subset of the completed
+    site. Often, the material within will be written in a friendlier
+    human-editable markup format such as reStructuredText or Markdown.

-    Initially, we will support support content stored in git repositories on
-    `GitHub `_, although our architecture will be flexible enough to integrate
-    content stored anywhere on a network reachable from the build system.
+    Initially, we will support content stored in git repositories on
+    `GitHub `_, although our architecture will be flexible enough
+    to integrate content stored anywhere on a network reachable from the build
+    system.

  control repository
-    Version controlled repository used to organize and administer a deconst platform. It contains:
-
-    * Plain-text documents that associate subtrees of indexed content, identified by a
-      :term:`content ID` prefix, with subtrees of :term:`presented URLs` on the presented site.
-    * Layout templates in `handlebars `_ format.
-    * Plain-text documents that associate subsets of :term:`presented URLs` with specific layouts.
-
-  content ID
-  content IDs
-    Unique identifier assigned to a single page of content generated from a :term:`content
-    repository`. It's important to note that a content ID is assigned to each *output* page, not
-    each source document. Depending on the :term:`preparer` and its configuration, these may differ.
-    Most of the architecture should treat these as opaque strings, although the content map may need to assume that they are hierarchal.
-
-    By convention, these are URLs that join the base URL of the :term:`content repository` with the
-    relative path of the rendered output page.
-
-    When making calls to the content API, the content ID must be URL encoded. For example,
-    ``https%3A%2F%2Fgithub.com%2Frackerlabs%2Fdocs-container-service`` is the encoded form of the content ID ``github.com/rackerlabs/docs-container-service``.
-
-    Examples:
-
-    * The version of this page on the default "master" branch:
-      ``https://github.com/deconst/deconst-docs/running/architecture``.
- * A specific post in a Jekyll blog, generated from (theoretical) content at - ``https://github.com/rackerlabs/developer-blog/_posts/mongodb-3.0-getting-started.md``: - ``https://github.com/rackerlabs/developer-blog/blog/mongodb-3.0-getting-started``. - - revision ID - revision IDs - Identifier used to isolate different staging environments from one another. - - presented URL - presented URLs - URL of a page within the final presented content of a deconst site. This should be the full URL, - including the scheme and domain. - - Example: ``https://developer.rackspace.com/sdks/cloud-servers/getting-started/``. - - template - Common markup that surrounds each presented page with navigation, brand identity, - copyright information and anything else that's shared among some subset of each site. - - metadata envelope - metadata envelopes - JSON document that contains a single page's worth of content as a rendered HTML fragment, along with any additional information necessary for the presentation of that page. See :ref:`the schema section ` for a description of the expected structure. + Version controlled repository used to organize and administer a deconst + platform. It contains: + + * Plain-text documents that associate subtrees of indexed content, + identified by a :term:`content ID` prefix, with subtrees of + :term:`presented URLs` on the presented site. + + * Layout templates in `handlebars `_ format. + + * Plain-text documents that associate subsets of :term:`presented URLs` + with specific layouts. + + content ID + content IDs + Unique identifier assigned to a single page of content generated from a + :term:`content repository`. It's important to note that a content ID is + assigned to each *output* page, not each source document. Depending on the + :term:`preparer` and its configuration, these may differ. Most of the + architecture should treat these as opaque strings, although the content map + may need to assume that they are hierarchal. + + By convention, these are URLs that join the base URL of the :term:`content + repository` with the relative path of the rendered output page. + + When making calls to the content API, the content ID must be URL encoded. + For example, + ``https%3A%2F%2Fgithub.com%2Frackerlabs%2Fdocs-container-service`` is the + encoded form of the content ID + ``github.com/rackerlabs/docs-container-service``. + + Examples: + + * The version of this page on the default "master" branch: + ``https://github.com/deconst/deconst-docs/running/architecture``. + + * A specific post in a Jekyll blog, generated from (theoretical) content at + ``https://github.com/rackerlabs/developer-blog/_posts/mongodb-3.0-getting-started.md``: + ``https://github.com/rackerlabs/developer-blog/blog/mongodb-3.0-getting-started``. + + revision ID + revision IDs + Identifier used to isolate different staging environments from one another. + + presented URL + presented URLs + URL of a page within the final presented content of a deconst site. This + should be the full URL, including the scheme and domain. + + Example: + ``https://developer.rackspace.com/sdks/cloud-servers/getting-started/``. + + template + Common markup that surrounds each presented page with navigation, brand + identity, copyright information and anything else that's shared among some + subset of each site. + + metadata envelope + metadata envelopes + JSON document that contains a single page's worth of content as a rendered + HTML fragment, along with any additional information necessary for the + presentation of that page. 
See :ref:`the schema section `
+    for a description of the expected structure.
diff --git a/index.rst b/index.rst
index aeda8a4..b1fb016 100644
--- a/index.rst
+++ b/index.rst
@@ -8,12 +8,21 @@ Deconst

 *Deconstruct your Documentation*

-`Deconst `_ is a continuous delivery pipeline used to assemble documentation from a heterogenous set of source repositories. Individual GitHub repositories containing content in :abbr:`.rst (reStructuredText)` or :abbr:`.md (Markdown)` formats are **prepared** by being partially rendered to a common JSON format, then mapped to subtrees of the final page by a **control repository**.
+`Deconst `_ is a continuous delivery pipeline used
+to assemble documentation from a heterogeneous set of source repositories.
+Individual GitHub repositories containing content in :abbr:`.rst
+(reStructuredText)` or :abbr:`.md (Markdown)` formats are **prepared** by being
+partially rendered to a common JSON format, then mapped to subtrees of the final
+page by a **control repository**.

 This guide serves two purposes:

-#. It's documentation for deconst itself. If you want to write documentation for a deconst-managed site, or if you want to run a deconst cluster yourself, this will help you get started.
-#. It's also used as a concrete example for deconst's development! We use this to dogfood the deconst contribution and renderer workflow.
+#. It's documentation for deconst itself. If you want to write documentation for
+   a deconst-managed site, or if you want to run a deconst cluster yourself, this
+   will help you get started.
+
+#. It's also used as a concrete example for deconst's development! We use this
+   to dogfood the deconst contribution and renderer workflow.

 Contents:
diff --git a/running/liftoff.rst b/running/liftoff.rst
index fcd4f8e..df02903 100644
--- a/running/liftoff.rst
+++ b/running/liftoff.rst
@@ -3,6 +3,10 @@ Liftoff!

 Before you get started, you'll need a few things:

-#. A collection of servers to host your Docker containers. We use `CoreOS `_ images on Rackspace cloud.
-#. `Docker `_ installed on each host. CoreOS already has this, but different images may not.
+#. A collection of servers to host your Docker containers. We use `CoreOS
+   `_ images on Rackspace cloud.
+
+#. `Docker `_ installed on each host.
+   CoreOS already has this, but different images may not.
+
 #. Probably other things that will materialize once we have real code to ship!
diff --git a/running/maintenance.rst b/running/maintenance.rst
index 9eccbaf..ddba342 100644
--- a/running/maintenance.rst
+++ b/running/maintenance.rst
@@ -1,88 +1,140 @@
 Maintenance
 ===========

-Once you've got a deconst cluster provisioned and running, you'll want to monitor its health and
-have some idea what you can fix when things go wrong.
+Once you've got a deconst cluster provisioned and running, you'll want to
+monitor its health and have some idea what you can fix when things go wrong.

 Prerequisites
 -------------

-Before you can effective maintain a cluster, you'll want to verify that you have these things.
+Before you can effectively maintain a cluster, you'll want to verify that you
+have these things.

- * **Credentials for the cloud account** that's being used to manage the cluster's resources.
- * **SSH access to the cluster.** You'll need to download the SSH private key used by your deconst instance and put it in `keys/{instance}.private.key` where `instance` is the name of your instance in your `credentials.yml`
- * **A clone of the deconst/deploy repository** from `GitHub `_. If you monitor more than one Deconst cluster, it's helpful to have a separate clone for each and name the clone's directory after the cluster rather than just calling them all "deploy."
- * **A copy of the credentials.yml file** for the instance. Your ops team should arrange for an out-of-band mechanism to securely distribute this file and stay up to date. Put it in the root directory of your ``deconst/deploy`` clone.
+ * **Credentials for the cloud account** that's being used to manage the
+   cluster's resources.
+
+ * **SSH access to the cluster.** You'll need to download the SSH private key
+   used by your deconst instance and put it in `keys/{instance}.private.key` where
+   `instance` is the name of your instance in your `credentials.yml`.
+
+ * **A clone of the deconst/deploy repository** from `GitHub
+   `_. If you monitor more than one Deconst
+   cluster, it's helpful to have a separate clone for each and name the clone's
+   directory after the cluster rather than just calling them all "deploy."
+
+ * **A copy of the credentials.yml file** for the instance. Your ops team should
+   arrange for an out-of-band mechanism to securely distribute this file and stay
+   up to date. Put it in the root directory of your ``deconst/deploy`` clone.

 Logs
 ----

-Application logs are consolidated and shipped to an Elasticsearch and Kibana cluster external to the Deconst cluster, so that you can quickly see what's happening across the entire system. The Deconst ELK node hosts Logstash (to manipulate the logs right before they're persisted). Point your browser to the public HTTPS Kibana URL of the external cluster to access Kibana. You can find credentials in your ``credentials.yml`` file:
+Application logs are consolidated and shipped to an Elasticsearch and Kibana
+cluster external to the Deconst cluster, so that you can quickly see what's
+happening across the entire system. The Deconst ELK node hosts Logstash (to
+manipulate the logs right before they're persisted). Point your browser to the
+public HTTPS Kibana URL of the external cluster to access Kibana. You can find
+credentials in your ``credentials.yml`` file:

 .. code-block:: bash

-    grep elasticsearch_username credentials.yml
-    grep elasitcsearch_password credentials.yml
+   grep elasticsearch_username credentials.yml
+   grep elasticsearch_password credentials.yml

 .. image:: /_images/kibana.jpg

-I recommend configuring an additional DNS record of ``logs.`` to point to this host, for convenience.
+I recommend configuring an additional DNS record of ``logs.`` to point to this
+host, for convenience.

-See the `Kibana documentation `_ for more information about using Kibana effectively.
+See the `Kibana documentation
+`_ for more
+information about using Kibana effectively.

 Scripts
 -------

-Within your ``deconst/deploy`` clone, there are a number of scripts that are useful for diagnosing and correcting problems on the cluster.
+Within your ``deconst/deploy`` clone, there are a number of scripts that are
+useful for diagnosing and correcting problems on the cluster.
+
+ * ``script/deploy`` runs the Ansible playbook again. If one or more services
+   have died, this is a good way to restore the missing ones without interfering
+   with anything that's already working properly. It's also the best way to
+   propagate configuration changes through the cluster.
+
+ * ``script/status`` runs ``docker ps -a`` on all worker hosts. This is a good
+   way to make sure that none of the services have unexpectedly died or are
+   flapping.
+ + * ``script/ips`` will show you the hosts and IPs of each system in the cluster. + It's occasionally useful to save a trip to the control panel. + + * ``script/lb`` runs a diagnostic check on the load balancers' node membership, + ensuring that requests are being forwarded to the correct ports on the worker + hosts, based on currently living containers. It can run in either a reporting + mode (``--report``) that prints a summary of the load balancer health, or a + corrective mode (``--fix``) that deletes old nodes and adds new ones. - * ``script/deploy`` runs the Ansible playbook again. If one or more services have died, this is a good way to restore the missing ones without interfering with anything that's already working properly. It's also the best way to propagate configuration changes through the cluster. - * ``script/status`` runs ``docker ps -a`` on all worker hosts. This is a good way to make sure that none of the services have unexpectedly died or are flapping. - * ``script/ips`` will show you the hosts and IPs of each system in the cluster. It's occasionally useful to save a trip to the control panel. - * ``script/lb`` runs a diagnostic check on the load balancers' node membership, ensuring that requests are being forwarded to the correct ports on the worker hosts, based on currently living containers. It can run in either a reporting mode (``--report``) that prints a summary of the load balancer health, or a corrective mode (``--fix``) that deletes old nodes and adds new ones. - * Finally, ``script/ssh `` will give you a shell on a host whose name matches the pattern you provide. Run it with no arguments to see a list of the available hosts; provide any unique substring to identity a host from that list. + * Finally, ``script/ssh `` will give you a shell on a host whose name + matches the pattern you provide. Run it with no arguments to see a list of the + available hosts; provide any unique substring to identity a host from that + list. Systemd ------- -Once you have a shell on a problem system, it's useful to know a few systemd commands to investigate and manage services. +Once you have a shell on a problem system, it's useful to know a few systemd +commands to investigate and manage services. -If you want to really get a handle on what systemd is and how it works, I recommend taking the time to read `systemd for Administrators `_. You'll also want to keep the `man pages `_ bookmarked. In a pinch, though, these commands will do. +If you want to really get a handle on what systemd is and how it works, I +recommend taking the time to read `systemd for Administrators +`_. +You'll also want to keep the `man pages +`_ bookmarked. In a pinch, +though, these commands will do. To **list units** matching a pattern and report their current status: .. code-block:: bash - systemctl list-units deconst-* + systemctl list-units deconst-* -To **view the current status of a unit** in more detail, including the most recent bit of its logs: +To **view the current status of a unit** in more detail, including the most +recent bit of its logs: .. code-block:: bash - systemctl status deconst-content@2 + systemctl status deconst-content@2 To **see the logs for a unit** directly, use: .. code-block:: bash - journalctl -b -u deconst-presenter@1 + journalctl -b -u deconst-presenter@1 To **follow the logs in real time**: .. code-block:: bash - journalctl -f -u deconst-presenter@1 + journalctl -f -u deconst-presenter@1 To **stop, start, or restart** one or more units: .. 
code-block:: bash - sudo systemctl stop deconst-presenter@1 - sudo systemctl start deconst-content@2 - sudo systemctl restart deconst-logstash + sudo systemctl stop deconst-presenter@1 + sudo systemctl start deconst-content@2 + sudo systemctl restart deconst-logstash If you have to nuke it from orbit --------------------------------- Take a deep breath: it's okay. -When things go so terribly that a cluster is unrecoverable, remember: Deconst stores *all* of its persistent data off-cluster, in Cloud Files, MongoDB and Elasticsearch. The worker hosts are designed to be ephemeral. If you lose ssh access or someone deletes libc or services start flapping and you decide that the system can't recover, you can delete the cloud servers directly, re-provision a new system with the same ``deconst/deploy`` setup (leaving the ``credentials.yml`` file unchanged), and all will be well, no data loss. It takes maybe ten to fifteen minutes. +When things go so terribly that a cluster is unrecoverable, remember: Deconst +stores *all* of its persistent data off-cluster, in Cloud Files, MongoDB and +Elasticsearch. The worker hosts are designed to be ephemeral. If you lose ssh +access or someone deletes libc or services start flapping and you decide that +the system can't recover, you can delete the cloud servers directly, +re-provision a new system with the same ``deconst/deploy`` setup (leaving the +``credentials.yml`` file unchanged), and all will be well, no data loss. It +takes maybe ten to fifteen minutes. diff --git a/running/webhooks.rst b/running/webhooks.rst index c1da264..449b843 100644 --- a/running/webhooks.rst +++ b/running/webhooks.rst @@ -3,40 +3,49 @@ Webhooks and Integration ------------------------ -Deconst uses a combination of webhooks and continuous integration "builds" to stay up to date with changes made to the control and content repositories. Although it *should* manage them itself, if anything isn't updating correctly, these are the first places you should check. +Deconst uses a combination of webhooks and continuous integration "builds" to +stay up to date with changes made to the control and content repositories. +Although it *should* manage them itself, if anything isn't updating correctly, +these are the first places you should check. -You can verify that they're installed correctly by visiting the ``settings/hooks`` page within the relevant GitHub repository. +You can verify that they're installed correctly by visiting the +``settings/hooks`` page within the relevant GitHub repository. -These are the integrations that need to be installed on a **control repository**: +These are the integrations that need to be installed on a **control +repository**: - * A ``.travis.yml`` file that clones and executes the *asset preparer*. It should have the following contents: + * A ``.travis.yml`` file that clones and executes the *asset preparer*. It + should have the following contents: - .. code-block:: yaml + .. code-block:: yaml - --- - language: node_js - node_js: - - "0.12" - install: - - git clone --depth 1 https://github.com/deconst/preparer-asset.git /tmp/preparer-asset - script: - - /tmp/preparer-asset/build.sh + --- + language: node_js + node_js: + - "0.12" + install: + - git clone --depth 1 https://github.com/deconst/preparer-asset.git /tmp/preparer-asset + script: + - /tmp/preparer-asset/build.sh - .. end the code block. +.. end the code block. 
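If the control repository's Travis build is failing, it's often faster to reproduce it locally than to iterate through CI. A minimal sketch, assuming the asset preparer reads the same ``CONTENT_SERVICE_URL`` and ``CONTENT_SERVICE_APIKEY`` environment variables used for content submission (check the preparer-asset README for the exact names it expects):

.. code-block:: bash

   # Clone the asset preparer exactly as the .travis.yml above does
   git clone --depth 1 https://github.com/deconst/preparer-asset.git /tmp/preparer-asset

   # Run it from a checkout of your control repository (path is hypothetical)
   cd ~/src/my-control-repo
   export CONTENT_SERVICE_URL=https://deconst.horse:9000  # assumed: your instance's content service
   export CONTENT_SERVICE_APIKEY=...                       # assumed: an API key from an administrator
   /tmp/preparer-asset/build.sh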
-These are the integrations that need to be installed on each **content repository**: +These are the integrations that need to be installed on each +**content repository**: - * A ``.travis.yml`` file that clones and executes the appropriate :term:`preparer` for that repository type. Here's an example for a Sphinx repository: + * A ``.travis.yml`` file that clones and executes the appropriate + :term:`preparer` for that repository type. Here's an example for a Sphinx + repository: - .. code-block:: yaml + .. code-block:: yaml - --- - language: python - python: - - "3.4" - install: - - "pip install -e git+https://github.com/deconst/preparer-sphinx.git#egg=deconstrst" - script: - - deconst-prepare-sphinx + --- + language: python + python: + - "3.4" + install: + - "pip install -e git+https://github.com/deconst/preparer-sphinx.git#egg=deconstrst" + script: + - deconst-prepare-sphinx .. end the code block. diff --git a/writing-docs/author/index.rst b/writing-docs/author/index.rst index 2213347..fc3be1d 100644 --- a/writing-docs/author/index.rst +++ b/writing-docs/author/index.rst @@ -1,11 +1,21 @@ Authoring Content for Deconst ============================= -Deconst *content authors* write the documentation that's rendered at some domain and path on the final instance. The content that makes up a deconst instance is brought together from many :term:`content repositories`, each of which contributes a single logical unit of documentation that can be maintained independently from all of the others. - -The domain and subpath that host the content from a specific content repository is determined by a mapping that's managed within the :term:`control repository` associated with your Deconst instance. To add a new content repository to the instance, you or a *site coordinator* will need to add an entry to the control repository's :ref:`content mapping file ` and configure a :abbr:`CI (Continuous Integration)` build. - -Once the content repository is fully configured, any changes merged into the "master" branch will automatically be live. +Deconst *content authors* write the documentation that's rendered at some domain +and path on the final instance. The content that makes up a deconst instance is +brought together from many :term:`content repositories`, each of which +contributes a single logical unit of documentation that can be maintained +independently from all of the others. + +The domain and subpath that host the content from a specific content repository +is determined by a mapping that's managed within the :term:`control repository` +associated with your Deconst instance. To add a new content repository to the +instance, you or a *site coordinator* will need to add an entry to the control +repository's :ref:`content mapping file ` and configure a :abbr:`CI +(Continuous Integration)` build. + +Once the content repository is fully configured, any changes merged into the +"master" branch will automatically be live. .. _adding-new-content-repository: @@ -15,129 +25,214 @@ Adding a New Content Repository The easiest content repositories to add to a Deconst instance are: - Written in one of the :ref:`already-supported formats `. -- Hosted in a git repository on `github.com `_, public or private. - -If your content repository does not meet those criteria, :ref:`integrating your content is still possible, but will likely take more work `. If you do qualify for the easy route, add a new content repository to Deconst by: - -#. 
**Ensure that the Deconst instance's GitHub account can access your repository.** If your repository is public, you don't have to do anything. If your repository is private, you'll need to grant the Deconst instance's GitHub account access before your build can be configured. Ask a Deconst administrator for the name of the bot account. - -#. **Create a "_deconst.json" file within each content root directory.** This file tells Deconst important details about the content within this directory. Place it in the same directory as your ``conf.py`` or ``_config.yml`` files. - - The most important setting within this file is the *content ID base*. The content ID base will be used to uniquely identify the content produced from this directory within the system, so it must be unique across *all* content repositories that are published to a Deconst cluster. The easiest way to accomplish this is to set the content ID base to the content repository's GitHub URL (including the trailing slash to be consistent). - - You can specify other settings within this file as well, but they're all optional. - - * ``githubUrl``: Set this to the content repository's GitHub URL. If you do, it may be used to generate "submit an issue" or "edit on GitHub" links for your content. - * ``githubBranch``: Target "edit on GitHub" links to modify content on a branch other than "master". - * ``preparer``: Set this to the name of a Docker container image that contains the preparer for this content. Generally, Deconst will automatically infer the preparer to use from the contents of the directory, but you can override it explicitly here if needed. The container name must be on a whitelist that's controlled by the cluster administrators. - * ``meta``: An object with arbitrary content that will be merged with document-specific metadata. This data will be available to :ref:`templates in the control repository ` beneath the ``meta`` key for extra customization. Check the README for the instance's control repository to see what keys have meaning for your templates. +- Hosted in a git repository on `github.com `_, public or + private. + +If your content repository does not meet those criteria, :ref:`integrating your +content is still possible, but will likely take more work +`. If you do qualify for the easy route, add a new +content repository to Deconst by: + +#. **Ensure that the Deconst instance's GitHub account can access your + repository.** If your repository is public, you don't have to do anything. If + your repository is private, you'll need to grant the Deconst instance's GitHub + account access before your build can be configured. Ask a Deconst administrator + for the name of the bot account. + +#. **Create a "_deconst.json" file within each content root directory.** This + file tells Deconst important details about the content within this directory. + Place it in the same directory as your ``conf.py`` or ``_config.yml`` files. + + The most important setting within this file is the *content ID base*. The + content ID base will be used to uniquely identify the content produced from + this directory within the system, so it must be unique across *all* content + repositories that are published to a Deconst cluster. The easiest way to + accomplish this is to set the content ID base to the content repository's + GitHub URL (including the trailing slash to be consistent). + + You can specify other settings within this file as well, but they're all + optional. + + * ``githubUrl``: Set this to the content repository's GitHub URL. 
If you do, + it may be used to generate "submit an issue" or "edit on GitHub" links for + your content. + + * ``githubBranch``: Target "edit on GitHub" links to modify content on a + branch other than "master". + + * ``preparer``: Set this to the name of a Docker container image that + contains the preparer for this content. Generally, Deconst will automatically + infer the preparer to use from the contents of the directory, but you can + override it explicitly here if needed. The container name must be on a + whitelist that's controlled by the cluster administrators. + + * ``meta``: An object with arbitrary content that will be merged with + document-specific metadata. This data will be available to :ref:`templates in + the control repository ` beneath the ``meta`` key for extra + customization. Check the README for the instance's control repository to see + what keys have meaning for your templates. Here's an example of the minimum possible ``_deconst.json`` file: .. code-block:: json - { - "contentIDBase": "https://github.com/deconst/deconst-docs/" - } + { + "contentIDBase": "https://github.com/deconst/deconst-docs/" + } Here's another ``_deconst.json`` example, fully populated: .. code-block:: json - { - "contentIDBase": "https://github.com/deconst/deconst-docs/", - "githubUrl": "https://github.com/deconst/deconst-docs/", - "preparer": "quay.io/deconst/preparer-sphinx", - "meta": { - "someKey": "someValue" - } - } - - One content repository can include many content root directories. Place a ``_deconst.json`` file within each one and Deconst will automatically prepare the content within each. Make sure that you give each directory a distinct content ID base! The easiest way to do this is to append a meaningful suffix to the GitHub repository URL for each one, like a version number: + { + "contentIDBase": "https://github.com/deconst/deconst-docs/", + "githubUrl": "https://github.com/deconst/deconst-docs/", + "preparer": "quay.io/deconst/preparer-sphinx", + "meta": { + "someKey": "someValue" + } + } + + One content repository can include many content root directories. Place a + ``_deconst.json`` file within each one and Deconst will automatically prepare + the content within each. Make sure that you give each directory a distinct + content ID base! The easiest way to do this is to append a meaningful suffix + to the GitHub repository URL for each one, like a version number: .. code-block:: json - { - "contentIDBase": "https://github.com/deconst/deconst-docs/v1/" - } + { + "contentIDBase": "https://github.com/deconst/deconst-docs/v1/" + } -#. **Send a pull request to the control repository to add your content repository's name to the automatic build list.** This is a file called ``content-repositories.json`` in the root directory of the control repository that looks like this: +#. **Send a pull request to the control repository to add your content + repository's name to the automatic build list.** This is a file called + ``content-repositories.json`` in the root directory of the control repository + that looks like this: .. code-block:: json - [ - { "kind": "github", "project": "deconst/deconst-docs" }, - { "kind": "github", "project": "myorg/my-content", "branches": ["current", "next"] } - ] + [ + { "kind": "github", "project": "deconst/deconst-docs" }, + { "kind": "github", "project": "myorg/my-content", "branches": ["current", "next"] } + ] - Add a new entry to the array with your project's name. 
Only content pushed to the branches listed by the ``branches`` setting will be deployed to production. By default, this includes only ``"master"``. + Add a new entry to the array with your project's name. Only content pushed to + the branches listed by the ``branches`` setting will be deployed to + production. By default, this includes only ``"master"``. - Once your pull request is merged, a :term:`Strider` build will be created for your content repository, and any changes that you make to your repository from this point forward will automatically be submitted to Deconst. + Once your pull request is merged, a :term:`Strider` build will be created for + your content repository, and any changes that you make to your repository + from this point forward will automatically be submitted to Deconst. -At this point, your content is being sent to Deconst, but nobody can see it yet. The next step is to work with a :ref:`coordinator ` to decide on a place your content should live in the context of the larger site. +At this point, your content is being sent to Deconst, but nobody can see it yet. +The next step is to work with a :ref:`coordinator ` to decide +on a place your content should live in the context of the larger site. Where Will Your Content Live ---------------------------- -The final output from each content repository will be presented at a subpath of the complete site. For example, if you create the following pages: +The final output from each content repository will be presented at a subpath of +the complete site. For example, if you create the following pages: .. code-block:: text - welcome - chapter-1/introduction - chapter-1/getting-started - chapter-2/more-advanced + welcome + chapter-1/introduction + chapter-1/getting-started + chapter-2/more-advanced -And you're currently mapped to the ``books/example/`` subpath of *mysite.com* by the control repository, then your pages will be available at the following URLs: +And you're currently mapped to the ``books/example/`` subpath of *mysite.com* by +the control repository, then your pages will be available at the following URLs: .. code-block:: text - https://mysite.com/books/example/welcome/ - https://mysite.com/books/example/chapter-1/introduction/ - https://mysite.com/books/example/chapter-1/getting-started/ - https://mysite.com/books/example/chapter-2/more-advanced/ + https://mysite.com/books/example/welcome/ + https://mysite.com/books/example/chapter-1/introduction/ + https://mysite.com/books/example/chapter-1/getting-started/ + https://mysite.com/books/example/chapter-2/more-advanced/ -As you work, you can freely create new pages and directories and they will automatically be available within that subpath. +As you work, you can freely create new pages and directories and they will +automatically be available within that subpath. -Content that you delete is also automatically deleted from the site. Be careful! When you rename or delete content, you may break users' existing bookmarks or links from other sites. Consider copying the content to its new path, creating a redirect, then deleting it from its old path to avoid disrupting the site's user experience. +Content that you delete is also automatically deleted from the site. Be careful! +When you rename or delete content, you may break users' existing bookmarks or +links from other sites. Consider copying the content to its new path, creating a +redirect, then deleting it from its old path to avoid disrupting the site's user +experience. 
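One low-tech way to confirm that a move or rename went smoothly is to spot-check the affected presented URLs once the build has finished. A sketch with plain ``curl``, using the example domain and subpath above (substitute your own):

.. code-block:: bash

   # Expect 200 for live pages and 301/302 for paths you've redirected
   for path in welcome chapter-1/introduction chapter-2/more-advanced; do
     curl -s -o /dev/null -w "%{http_code}  $path\n" \
       "https://mysite.com/books/example/$path/"
   done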
-Content mapping is determined by :ref:`content mapping configuration files ` within the control repository. Open an issue on the control repository to discuss the addition of new content, or modify the content mapping files yourself in a pull request if you're also a site coordinator. +Content mapping is determined by :ref:`content mapping configuration files +` within the control repository. Open an issue on the control +repository to discuss the addition of new content, or modify the content mapping +files yourself in a pull request if you're also a site coordinator. .. _pull-request-builds: Previewing Changes ------------------ -If your content repository is using a :term:`Strider` build, each time you *open a new pull request* or *push new commits to an existing pull request*, Strider will build a preview of your work to a staging environment. Once the build is complete, a bot account will post a comment on your pull request including a link to your personal preview. +If your content repository is using a :term:`Strider` build, each time you *open +a new pull request* or *push new commits to an existing pull request*, Strider +will build a preview of your work to a staging environment. Once the build is +complete, a bot account will post a comment on your pull request including a +link to your personal preview. -While you're browsing your preview, all page links will be manipulated to keep you within your personal preview environment, so you can navigate around the full site without accidentally jumping to production. +While you're browsing your preview, all page links will be manipulated to keep +you within your personal preview environment, so you can navigate around the +full site without accidentally jumping to production. .. _custom-content-integration: Custom Content Repository Integrations -------------------------------------- -While Deconst provides automation to support content repositories that satisfy the constraints listed above, it's flexible enough to accept content from virtually anywhere. You can even submit content entirely by manually using nothing but ``curl`` if you really want to. If your content repository is different, you'll need to do more work up front. +While Deconst provides automation to support content repositories that satisfy +the constraints listed above, it's flexible enough to accept content from +virtually anywhere. You can even submit content entirely by manually using +nothing but ``curl`` if you really want to. If your content repository is +different, you'll need to do more work up front. + +Whatever is different about your content repository, you'll need two pieces of +information to begin: + +* The **content service URL** for the Deconst instance. Generally, this will be + port 9000 on a domain served by the instance, like + ``https://deconst.horse:9000``. -Whatever is different about your content repository, you'll need two pieces of information to begin: +* An **API key** issued for you by an administrator. While you can technically + use a single key for all of your content, I recommend using a distinct key for + each content repository, because it diminishes the impact of a key being revoked + and makes it easier to track activity in the logs. -* The **content service URL** for the Deconst instance. Generally, this will be port 9000 on a domain served by the instance, like ``https://deconst.horse:9000``. -* An **API key** issued for you by an administrator. 
While you can technically use a single key for all of your content, I recommend using a distinct key for each content repository, because it diminishes the impact of a key being revoked and makes it easier to track activity in the logs. -**If your repository is not hosted on github.com,** but is reachable from the network that the Deconst instance is running on, you'll need to create a custom Strider build. You can do this for any git-based provider by choosing the "Manual Add" option under the "Projects" tab: +**If your repository is not hosted on github.com,** but is reachable from the +network that the Deconst instance is running on, you'll need to create a custom +Strider build. You can do this for any git-based provider by choosing the +"Manual Add" option under the "Projects" tab: .. image:: /_images/strider-manual-add.jpg -**If your repository is not reachable from the network** because it's hosted behind a firewall or **if your repository is not version controlled with git**, you'll need to configure your own continuous integration solution, like `Jenkins `_. You should set it up to run the appropriate :term:`preparer` on your content repository each time new work is accepted. +**If your repository is not reachable from the network** because it's hosted +behind a firewall or **if your repository is not version controlled with git**, +you'll need to configure your own continuous integration solution, like `Jenkins +`_. You should set it up to run the appropriate +:term:`preparer` on your content repository each time new work is accepted. -**If your repository is not written in a supported content format,** you'll need to write a custom :term:`preparer`. Depending on the flexibility and architecture of the tooling, the difficulty of doing this can vary from "a few days' work by a developer" to "a lot of time". +**If your repository is not written in a supported content format,** you'll need +to write a custom :term:`preparer`. Depending on the flexibility and +architecture of the tooling, the difficulty of doing this can vary from "a few +days' work by a developer" to "a lot of time". .. _supported-formats: Supported Content Repository Formats ------------------------------------ -Each content repository can independently choose a documentation engine that makes the most sense for the content it contains. You can choose from any format that has a matching :term:`preparer`. Preparers extend the native documentation engine to support additional functionality that Deconst needs to integrate their output with the rest of the system. +Each content repository can independently choose a documentation engine that +makes the most sense for the content it contains. You can choose from any format +that has a matching :term:`preparer`. Preparers extend the native documentation +engine to support additional functionality that Deconst needs to integrate their +output with the rest of the system. .. toctree:: diff --git a/writing-docs/author/jekyll.rst b/writing-docs/author/jekyll.rst index bdc6bb0..129629e 100644 --- a/writing-docs/author/jekyll.rst +++ b/writing-docs/author/jekyll.rst @@ -1,54 +1,90 @@ Markdown content in Jekyll ========================== -`Jekyll `_ is a static site engine that's specialized for blog authoring. Although Jekyll can support content in many markup formats, it's most commonly used to render `markdown `_. +`Jekyll `_ is a static site engine that's +specialized for blog authoring. 
Although Jekyll can support content in +many markup formats, it's most commonly used to render `markdown +`_. Frontmatter ----------- Certain frontmatter keys have meaning to both Jekyll and Deconst. These include: - * ``title`` will be made available to site layouts as ``{{{ metadata.title }}}``. Usually, this will be used within an ``

<h1>`` element on the page and as the browser title. - * ``categories`` is a YAML list of strings used to manually identify related content. These are meant to be chosen by hand from a small, fixed list of possibilities. Site layouts should generally render categories rather than tags. - * ``tags`` is another YAML list of strings. These are more ad-hoc, but may be manipulated by the content service. Additional tags may be appended by latent semantic indexing or normalization processes. + * ``title`` will be made available to site layouts as ``{{{ + metadata.title }}}``. Usually, this will be used within an ``<h1>

`` + element on the page and as the browser title. + + * ``categories`` is a YAML list of strings used to manually identify related + content. These are meant to be chosen by hand from a small, fixed list of + possibilities. Site layouts should generally render categories rather than + tags. + + * ``tags`` is another YAML list of strings. These are more ad-hoc, but may be + manipulated by the content service. Additional tags may be appended by latent + semantic indexing or normalization processes. + * ``author`` should be set to the name of a blog post's author. + * ``bio`` may be set to a short paragraph introducing the author. - * ``date`` is used to specify the publish date of a blog post in **YYYY-mm-dd HH:MM:SS** format. - * ``disqus`` is a subdictionary used to control the inclusion of a Disqus comment field, if supported by the layout. It should have two subkeys: ``short_name``, as provided by your Disqus account, and ``mode``, which may be either ``count`` or ``embed`` to control the Disqus script injected into this page. -All of these are optional and only have meaning if the equivalent metadata attributes are used in the page's Deconst layout. + * ``date`` is used to specify the publish date of a blog post in + **YYYY-mm-dd HH:MM:SS** format. + + * ``disqus`` is a subdictionary used to control the inclusion of a + Disqus comment field, if supported by the layout. It should have + two subkeys: ``short_name``, as provided by your Disqus account, + and ``mode``, which may be either ``count`` or ``embed`` to control + the Disqus script injected into this page. + +All of these are optional and only have meaning if the equivalent +metadata attributes are used in the page's Deconst layout. Assets ------ -To properly include images or other static assets in your Jekyll content, use the `jekyll-assets plugin `_. The Jekyll preparer will hook the assets plugin and override it to properly submit assets to the content service. +To properly include images or other static assets in your Jekyll +content, use the `jekyll-assets plugin +`_. The Jekyll preparer +will hook the assets plugin and override it to properly submit assets +to the content service. + + #. Add an ``_assets/images`` directory to your Jekyll repository. + Place any image assets that you reference within this directory. - 1. Add an ``_assets/images`` directory to your Jekyll repository. Place any image assets that you reference within this directory. - 2. Reference images within a post or a page with the ``asset_path`` Liquid tag. + #. Reference images within a post or a page with the ``asset_path`` + Liquid tag. With Markdown: .. code-block:: text - ![alt text]({% asset_path image-path.png %}) + ![alt text]({% asset_path image-path.png %}) Or with raw HTML: .. code-block:: html - alt text + alt text Plugins and Dependencies ------------------------ -If you use Jekyll plugins that rely on other gems, you'll need to add a ``Gemfile`` to the root directory of your content repository to declare your dependencies. +If you use Jekyll plugins that rely on other gems, you'll need to add +a ``Gemfile`` to the root directory of your content repository to +declare your dependencies. -It isn't necessary to list either jekyll or jekyll-assets as explicit dependencies, because the preparer already includes them. If you do include them, the versions you declare will be ignored during preparation, anyway. It won't harm the build to do so, though. 
+It isn't necessary to list either jekyll or jekyll-assets as explicit +dependencies, because the preparer already includes them. If you do +include them, the versions you declare will be ignored during +preparation, anyway. It won't harm the build to do so, though. .. code-block:: ruby - source 'https://rubygems.org' + source 'https://rubygems.org' - gem 'stringex' + gem 'stringex' -Be sure to run ``bundle install`` to generate an equivalent ``Gemfile.lock``, to ensure that the versions of your dependencies are consistent from build to build. +Be sure to run ``bundle install`` to generate an equivalent +``Gemfile.lock``, to ensure that the versions of your dependencies are +consistent from build to build. diff --git a/writing-docs/author/sphinx.rst b/writing-docs/author/sphinx.rst index 2548e22..eb721fd 100644 --- a/writing-docs/author/sphinx.rst +++ b/writing-docs/author/sphinx.rst @@ -1,41 +1,72 @@ reStructuredText content in Sphinx ================================== -`Sphinx `_ is a documentation builder that assembles `reStructureText `_ source files into cohesive output that includes tables of contents, cross-references, and integrated navigation. +`Sphinx `_ is a documentation +builder that assembles `reStructuredText +`_ source files into +cohesive output that includes tables of contents, cross-references, +and integrated navigation. -Deconst uses native Sphinx code as much as possible, which means that you can mostly use write regular Sphinx documentation, even using extensions or custom directives, without worrying too about the Deconst integration. Exceptions to this are described below. +Deconst uses native Sphinx code as much as possible, which means that +you can mostly write regular Sphinx documentation, even using +extensions or custom directives, without worrying too much about the +Deconst integration. Exceptions to this are described below. Assets ------ To integrate properly with Deconst's asset pipeline: - 1. Place any images in an `_images` directory at the top level of your Sphinx documentation. - 2. Reference images in ``.rst`` files by including a ``.. image`` macro. + #. Place any images in an `_images` directory at the top level of + your Sphinx documentation. + + #. Reference images in ``.rst`` files by including a ``.. image`` + macro. .. code-block:: rst - .. image:: /_images/deconst-initial.png + .. image:: /_images/deconst-initial.png + Tables of contents ------------------ -Native Sphinx uses `toctree directives `_ to both control overall documentation structure and flow and generate intelligent tables of contents. These still work within Deconst, but, depending on your template's needs, you may need to be aware of some special considerations. +Native Sphinx uses `toctree directives +`_ to both +control overall documentation structure and flow and generate +intelligent tables of contents. These still work within Deconst, but, +depending on your template's needs, you may need to be aware of some +special considerations. -First of all, if you wish to allow Deconst's templates to handle table of contents rendering entirely, you'll likely want to hide the tables of contents within the page content itself. To do this, add the ``:hidden:`` argument to the directive. +First of all, if you wish to allow Deconst's templates to handle table +of contents rendering entirely, you'll likely want to hide the tables +of contents within the page content itself. To do this, add the +``:hidden:`` argument to the directive. .. code-block:: rst - .. 
toctree:: - :hidden: - - one - two - three - -When using the ``deconst-serial`` builder, each page of output will have two relevant tables of contents: a **local** table of contents that consists of anchor links within the local page, and a **repository-wide** table of contents for the entire content repository. The repository-wide table of contents is generated by rendering just the ``.. toctree`` directive from the Sphinx master document (by default, ``index.rst``), *ignoring any :hidden: arguments encountered.* Any additional arguments, such as ``:maxdepth:``, will be respected. - -To exercise more control over the generated table of contents, create a file called ``_toc.rst``. If ``_toc.rst`` exists, it is used instead of the ``.. toctree`` directive from the master document. The ``_toc.rst`` file can contain whatever markup you wish to use as your table of contents. + .. toctree:: + :hidden: + + one + two + three + +When using the ``deconst-serial`` builder, each page of output will +have two relevant tables of contents: a **local** table of contents +that consists of anchor links within the local page, and a +**repository-wide** table of contents for the entire content +repository. The repository-wide table of contents is generated by +rendering just the ``.. toctree`` directive from the Sphinx master +document (by default, ``index.rst``), *ignoring any :hidden: arguments +encountered.* Any additional arguments, such as ``:maxdepth:``, will +be respected. + +To exercise more control over the generated table of contents, create +a file called ``_toc.rst``. If ``_toc.rst`` exists, it is used instead +of the ``.. toctree`` directive from the master document. The +``_toc.rst`` file can contain whatever markup you wish to use as your +table of contents. .. code-block:: rst @@ -55,92 +86,132 @@ To exercise more control over the generated table of contents, create a file cal Extensions ---------- -To use extensions beyond `the ones built in to Sphinx itself `_, add a ``requirements.txt`` or ``deconst-requirements.txt`` file to the directory that contains your ``conf.py`` and ``_deconst.json`` files. List the dependencies that provide the extensions you wish to use, using the `same format that pip expects `_. +To use extensions beyond `the ones built in to Sphinx itself +`_, +add a ``requirements.txt`` or ``deconst-requirements.txt`` file to the +directory that contains your ``conf.py`` and ``_deconst.json`` files. +List the dependencies that provide the extensions you wish to use, +using the `same format that pip expects +`_. -If both ``requirements.txt`` and ``deconst-requirements.txt`` are present, ``deconst-requirements.txt`` is used. This allows you to specify different dependencies for local Sphinx builds and for Deconst preparer builds if necessary. +If both ``requirements.txt`` and ``deconst-requirements.txt`` are +present, ``deconst-requirements.txt`` is used. This allows you to +specify different dependencies for local Sphinx builds and for Deconst +preparer builds if necessary. .. warning:: - Be careful if you specify a dependency on Sphinx itself. If the version you specify conflicts with the one `used by the Sphinx preparer `_, your build may break. If you wish to have a Sphinx dependency to make local Sphinx workflows easier, consider extracting other dependencies into an explicit ``deconst-requirements.txt`` file to avoid collisions. + Be careful if you specify a dependency on Sphinx itself. 
If the + version you specify conflicts with the one `used by the Sphinx + preparer + `_, + your build may break. If you wish to have a Sphinx dependency to + make local Sphinx workflows easier, consider extracting other + dependencies into an explicit ``deconst-requirements.txt`` file to + avoid collisions. Special per-page metadata ------------------------- -Sphinx offers page-level customization by reading `per-page metadata `_ that may be present on each page. Certain fields can be used to customize Deconst's output. +Sphinx offers page-level customization by reading `per-page metadata +`_ +that may be present on each page. Certain fields can be used to +customize Deconst's output. deconsttitle ^^^^^^^^^^^^ -If present, a ``:deconsttitle:`` field will be used as the page title within Deconst templates rather than the one that Sphinx assigns each document, which is always the top-level heading. +If present, a ``:deconsttitle:`` field will be used as the page title +within Deconst templates rather than the one that Sphinx assigns each +document, which is always the top-level heading. .. code-block:: rst - :deconsttitle: Custom Title + :deconsttitle: Custom Title - This heading will appear on the page, but not in the title - ========================================================== + This heading will appear on the page, but not in the title + ========================================================== deconstcategories ^^^^^^^^^^^^^^^^^ -Specify one or more categories to apply to an individual page with the ``:deconstcategories:`` field. The field's value is split on commas and whitespace is trimmed from each element. +Specify one or more categories to apply to an individual page with the +``:deconstcategories:`` field. The field's value is split on commas +and whitespace is trimmed from each element. .. code-block:: rst - :deconstcategories: one, two + :deconstcategories: one, two Categories redundant with repository-global ones will be deduplicated. deconstunsearchable ^^^^^^^^^^^^^^^^^^^ -Exclude a page from search results by marking it with a ``:deconstunsearchable:`` item. This *overrides* the :ref:`deconst_default_unsearchable ` repository-wide setting for this document. +Exclude a page from search results by marking it with a +``:deconstunsearchable:`` item. This *overrides* the +:ref:`deconst_default_unsearchable ` +repository-wide setting for this document. .. code-block:: rst - :deconstunsearchable: true + :deconstunsearchable: true Other metadata ^^^^^^^^^^^^^^ -Any other fields included here are available to :ref:`template authors ` within the ``deconst.content.envelope.meta`` structure. Co-ordinate with your template designers to ascribe whatever meaning to other fields that you wish! +Any other fields included here are available to :ref:`template authors +` within the ``deconst.content.envelope.meta`` +structure. Co-ordinate with your template designers to ascribe +whatever meaning to other fields that you wish! conf.py settings ---------------- -Repository-wide settings for Sphinx are managed by a ``conf.py`` file at the root of your Sphinx content. Deconst uses several custom settings within this file for its global configuration as well. +Repository-wide settings for Sphinx are managed by a ``conf.py`` file +at the root of your Sphinx content. Deconst uses several custom +settings within this file for its global configuration as well. 
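Each setting is described below. To try out ``conf.py`` changes without waiting for a CI build, you can run the Sphinx preparer locally from your content root, using the same commands as the content-repository ``.travis.yml`` shown earlier. A sketch, pointed at a dev or staging content service rather than production:

.. code-block:: bash

   pip install -e "git+https://github.com/deconst/preparer-sphinx.git#egg=deconstrst"

   # Run from the directory that contains conf.py and _deconst.json
   export CONTENT_SERVICE_URL=http://localhost:9000  # assumed: a local or staging content service
   export CONTENT_SERVICE_APIKEY=...                 # the API key issued for this repository
   deconst-prepare-sphinx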
builder ^^^^^^^ -Deconst supports two distinct **builders** that alter the way that envelopes are generated, roughly corresponding to Sphinx's serial (``make html``) and single-page (``make singlehtml``) HTML builders. The ``deconst-single`` builder assembles all content from the repository into a single page, while the ``deconst-serial`` builder creates a different page for each ``.rst`` document. +Deconst supports two distinct **builders** that alter the way that +envelopes are generated, roughly corresponding to Sphinx's serial +(``make html``) and single-page (``make singlehtml``) HTML builders. +The ``deconst-single`` builder assembles all content from the +repository into a single page, while the ``deconst-serial`` builder +creates a different page for each ``.rst`` document. -The ``deconst-serial`` builder is the default. To use the single builder instead, set the ``builder`` variable within your ``conf.py``. +The ``deconst-serial`` builder is the default. To use the single +builder instead, set the ``builder`` variable within your ``conf.py``. .. code-block:: python - builder = 'deconst-single' - # OR: - builder = 'deconst-serial' + builder = 'deconst-single' + # OR: + builder = 'deconst-serial' .. _deconst-default-unsearchable: deconst_default_unsearchable ^^^^^^^^^^^^^^^^^^^^^^^^^^^^ -To exclude all envelopes within a content repository from search indexing, set ``deconst_default_unsearchable`` to ``True``: +To exclude all envelopes within a content repository from search +indexing, set ``deconst_default_unsearchable`` to ``True``: .. code-block:: python - deconst_default_unsearchable = True + deconst_default_unsearchable = True -Notice that this may still be overridden by individual envelopes with per-page metadata. +Notice that this may still be overridden by individual envelopes with +per-page metadata. deconst_categories ^^^^^^^^^^^^^^^^^^ -To apply one or more :term:`categories` to all pages within your repository, specify them as ``deconst_categories``: +To apply one or more :term:`categories` to all pages within your +repository, specify them as ``deconst_categories``: .. code-block:: python - deconst_categories = ['global category one', 'global category two'] + deconst_categories = ['global category one', 'global category two'] diff --git a/writing-docs/coordinator/control-advanced.rst b/writing-docs/coordinator/control-advanced.rst index 9dc614c..9d635d5 100644 --- a/writing-docs/coordinator/control-advanced.rst +++ b/writing-docs/coordinator/control-advanced.rst @@ -6,8 +6,14 @@ Advanced Topics Content IDs ^^^^^^^^^^^ -Strictly speaking, the way that :term:`content IDs` are assigned is an arbitrary decision made by the :term:`preparer` that's configured on that repository. However, by convention, they follow a pattern: +Strictly speaking, the way that :term:`content IDs` are assigned is an +arbitrary decision made by the :term:`preparer` that's configured on +that repository. However, by convention, they follow a pattern: *base URL of the content repository* + *subpath of the rendered page* -For example, suppose that we have a content repository hosted at https://github.com/deconst/deconst-docs that contains Sphinx documentation. A page within that repository that renders at *writing-docs/coordinator* would be assigned a content ID of ``https://github.com/deconst/deconst-docs/writing-docs/coordinator``. +For example, suppose that we have a content repository hosted at +https://github.com/deconst/deconst-docs that contains Sphinx +documentation. 
A page within that repository that renders at +*writing-docs/coordinator* would be assigned a content ID of +``https://github.com/deconst/deconst-docs/writing-docs/coordinator``. diff --git a/writing-docs/coordinator/index.rst b/writing-docs/coordinator/index.rst index 48bfda5..98a3292 100644 --- a/writing-docs/coordinator/index.rst +++ b/writing-docs/coordinator/index.rst @@ -3,26 +3,44 @@ Coordinating a Deconst Site =========================== -A Deconst *site coordinator* has several responsibilities, including management of the site's high-level information architecture, graphic design, and maintenance of the layouts and assets for each domain within the site. +A Deconst *site coordinator* has several responsibilities, including +management of the site's high-level information architecture, graphic +design, and maintenance of the layouts and assets for each domain +within the site. The Control Repository ---------------------- -Every deconst installation is configured to point to a single **control repository**, a version-controlled repository that's used to manage site-wide concerns. It's a GitHub repository containing mostly plain-text files that you can edit however you wish, even directly with GitHub's web editor! +Every deconst installation is configured to point to a single +**control repository**, a version-controlled repository that's used to +manage site-wide concerns. It's a GitHub repository containing mostly +plain-text files that you can edit however you wish, even directly +with GitHub's web editor! -While changes to assets will go live automatically after a short delay, changes to content or template mappings requires administrator action to take effect. +While changes to assets will go live automatically after a short +delay, changes to content or template mappings requires administrator +action to take effect. The control repository is expected to include certain contents: - * At least one :ref:`content mapping file ` that tells Deconst which content to display where. - * :ref:`Templates ` that give individual pages visual identity. - * :ref:`Template mapping files ` that specify which template should be used to render a specific page. - * :ref:`Global assets ` such as stylesheets, JavaScript files, or images that are referenced by the layout templates. + * At least one :ref:`content mapping file ` that tells + Deconst which content to display where. + + * :ref:`Templates ` that give individual pages + visual identity. + + * :ref:`Template mapping files ` that specify + which template should be used to render a specific page. + + * :ref:`Global assets ` such as stylesheets, + JavaScript files, or images that are referenced by the layout + templates. + .. toctree:: - mapping - templates - template-assets - search - control-advanced + mapping + templates + template-assets + search + control-advanced diff --git a/writing-docs/coordinator/mapping.rst b/writing-docs/coordinator/mapping.rst index e566284..1a33234 100644 --- a/writing-docs/coordinator/mapping.rst +++ b/writing-docs/coordinator/mapping.rst @@ -3,47 +3,78 @@ Content Mapping Files --------------------- -The **content mapping file** is a `JSON `_ file located at the path ``config/content.json``. It works by placing a subtree of content from a certain content repository at a subtree of a domain within the overall Deconst site. +The **content mapping file** is a `JSON `_ file +located at the path ``config/content.json``. 
It works by placing a +subtree of content from a certain content repository at a subtree of a +domain within the overall Deconst site. -For example, suppose we have a content repository at ``https://github.com/user/book1`` that creates the following pages: +For example, suppose we have a content repository at +``https://github.com/user/book1`` that creates the following pages: .. code-block:: text - introduction - chapter-1/getting-started - chapter-1/and-then + introduction + chapter-1/getting-started + chapter-1/and-then -And another content repository at ``https://github.com/user/book2`` that creates these pages: +And another content repository at ``https://github.com/user/book2`` +that creates these pages: .. code-block:: text - welcome - chapter-1/the-basics - chapter-1/more-detail + welcome + chapter-1/the-basics + chapter-1/more-detail -If we create mapping entries that map ``library/my-book/`` to ``https://github.com/user/book1/`` and ``library/another-book/`` to ``https://github.com/user/book2/``, both on the domain *books.horse*, these pages will be available at the following URLs: +If we create mapping entries that map ``library/my-book/`` to +``https://github.com/user/book1/`` and ``library/another-book/`` to +``https://github.com/user/book2/``, both on the domain *books.horse*, +these pages will be available at the following URLs: .. code-block:: text - https://books.horse/library/my-book/introduction/ - https://books.horse/library/my-book/chapter-1/getting-started/ - https://books.horse/library/my-book/chapter-1/and-then/ - https://books.horse/library/another-book/welcome/ - https://books.horse/library/another-book/chapter-1/the-basics/ - https://books.horse/library/another-book/chapter-1/more-detail/ + https://books.horse/library/my-book/introduction/ + https://books.horse/library/my-book/chapter-1/getting-started/ + https://books.horse/library/my-book/chapter-1/and-then/ + https://books.horse/library/another-book/welcome/ + https://books.horse/library/another-book/chapter-1/the-basics/ + https://books.horse/library/another-book/chapter-1/more-detail/ -The **longest prefix** that matches an incoming URL is used to decide which mapping is used to locate the content to render. For example, if ``/base/`` is mapped to ``https://github.com/user/base``, but ``/base/subpage/`` is mapped to ``https://github.com/user/subpage``, requests will be mapped as follows: +The **longest prefix** that matches an incoming URL is used to decide +which mapping is used to locate the content to render. For example, if +``/base/`` is mapped to ``https://github.com/user/base``, but +``/base/subpage/`` is mapped to ``https://github.com/user/subpage``, +requests will be mapped as follows: + + * **https://books.horse/base** will render + **https://github.com/user/base/**. + + * **https://books.horse/base/something** will render + **https://github.com/user/base/something**. + + * **https://books.horse/base/subpage** will render + **https://github.com/user/subpage** because the ``/base/subpage/`` + mapping now takes precendence, *even if + https://github.com/user/base/subpage exists within that content + repository.* + + * **https://books.horse/base/subpage/anything** will render + **https://github.com/user/subpage/anything**. - * **https://books.horse/base** will render **https://github.com/user/base/**. - * **https://books.horse/base/something** will render **https://github.com/user/base/something**. 
- * **https://books.horse/base/subpage** will render **https://github.com/user/subpage** because the ``/base/subpage/`` mapping now takes precendence, *even if https://github.com/user/base/subpage exists within that content repository.* - * **https://books.horse/base/subpage/anything** will render **https://github.com/user/subpage/anything**. .. note:: - Technically, content mappings work with the :term:`content IDs` that are produced by the :term:`preparer` that "builds" each content repository. To do more complicated mappings, it's helpful to know the :ref:`details of exactly how they're produced `, but to get started you can assume that the content repository's URL is a prefix for the content IDs of all of its content. + Technically, content mappings work with the :term:`content IDs` + that are produced by the :term:`preparer` that "builds" each + content repository. To do more complicated mappings, it's helpful + to know the :ref:`details of exactly how they're produced + `, but to get started you can assume that the + content repository's URL is a prefix for the content IDs of all of + its content. -Changes to the content mapping files will take effect as soon as they're merged into the ``master`` branch of the control repository. Huzzah for continuous delivery! +Changes to the content mapping files will take effect as soon as +they're merged into the ``master`` branch of the control repository. +Huzzah for continuous delivery! .. _control-map-syntax: @@ -54,24 +85,31 @@ The content mapping file syntax looks like this: .. code-block:: json - { - "books.horse": { - "content": { - "/": "https://github.com/user/library-welcome/", - "/library/my-book/": "https://github.com/user/book1/", - "/library/another-book/": "https://github.com/user/book-2/" - } - }, - "nextbigthing.io": { - "content": { - "/": "https://github.com/someone-else/nextbigthing-index/", - "/product/": "https://github.com/someone-else/product-sdk/" - } - } - } - -It's an error to map the exact same prefix on the same domain more than once. This is to prevent you from accidentally clobbering your own mappings by mistake! + { + "books.horse": { + "content": { + "/": "https://github.com/user/library-welcome/", + "/library/my-book/": "https://github.com/user/book1/", + "/library/another-book/": "https://github.com/user/book-2/" + } + }, + "nextbigthing.io": { + "content": { + "/": "https://github.com/someone-else/nextbigthing-index/", + "/product/": "https://github.com/someone-else/product-sdk/" + } + } + } + +It's an error to map the exact same prefix on the same domain more +than once. This is to prevent you from accidentally clobbering your +own mappings by mistake! .. note:: - End each URL prefix and each content ID prefix with a trailing slash. Deconst is smart enough to do the right thing for content at the root of each mapping: the URL **https://books.horse/library/my-book** will render the content at **https://github.com/user/book1/**, not **https://github.com/user/library-welcome/my-book**. + End each URL prefix and each content ID prefix with a trailing + slash. Deconst is smart enough to do the right thing for content at + the root of each mapping: the URL + **https://books.horse/library/my-book** will render the content at + **https://github.com/user/book1/**, not + **https://github.com/user/library-welcome/my-book**. 
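Because a syntax error in ``config/content.json`` will keep the new mapping from being applied, it's worth validating the file locally before opening a pull request. A small sketch, assuming ``jq`` is installed:

.. code-block:: bash

   # Fail loudly if the JSON doesn't parse
   jq . config/content.json > /dev/null && echo "content.json parses"

   # List each domain with the URL prefixes it maps, to eyeball duplicate
   # prefixes and missing trailing slashes
   jq -r 'to_entries[] | "\(.key): \(.value.content | keys | join(", "))"' config/content.json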
diff --git a/writing-docs/coordinator/search.rst b/writing-docs/coordinator/search.rst
index 969f563..67dd682 100644
--- a/writing-docs/coordinator/search.rst
+++ b/writing-docs/coordinator/search.rst
@@ -3,73 +3,95 @@ Search
------
 
-All content that's submitted to a Deconst instance is also indexed for search. In order to display search results in a Deconst site, you'll need to implement a search results page within your control repository.
-
-The search results page must be defined entirely by a Nunjucks template within your control repository. It can't be submitted from any content repository. This is because, to actually perform the search and enumerate results, you need to use a `custom Nunjucks filter `_ that's only available to control repository templates.
-
-First, map a search path within your :ref:`content map <control-map-syntax>` at ``config/content.json``. It doesn't need to map to an actual content ID. Instead, you'll usually want to map it to ``null`` to use a fixed, empty metadata envelope.
+All content that's submitted to a Deconst instance is also indexed for
+search. To display search results in a Deconst site, you'll need to
+implement a search results page within your control repository.
+
+The search results page must be defined entirely by a Nunjucks
+template within your control repository. It can't be submitted from
+any content repository. This is because, to perform the search and
+enumerate results, you need to use a `custom Nunjucks filter `_
+that's only available to control repository templates.
+
+First, map a search path within your :ref:`content map
+<control-map-syntax>` at ``config/content.json``. It doesn't need to
+map to an actual content ID. Instead, you'll usually want to map it to
+``null`` to use a fixed, empty metadata envelope.
 
.. code-block:: json
 
-   {
-     "books.horse": {
-       "content": {
-         "/": "https://github.com/user/library-welcome/",
-         "/search/": null
-       }
-     }
-   }
+   {
+     "books.horse": {
+       "content": {
+         "/": "https://github.com/user/library-welcome/",
+         "/search/": null
+       }
+     }
+   }
 
-Now :ref:`route <control-template-map>` this path to the search template in ``config/routes.json``.
+Now :ref:`route <control-template-map>` this path to the search
+template in ``config/routes.json``.
 
.. code-block:: json
 
-   {
-     "books.horse": {
-       "routes": {
-         "^/": "default.html",
-         "^/search/?": "search.html"
-       }
-     }
-   }
-
-Finally, you'll need to :ref:`create the template ` that displays the results of a given search. Create it as you would any other template, but rather than render ``{{ deconst.content.envelope.body }}``, invoke the ``search`` filter on the query parameter:
+   {
+     "books.horse": {
+       "routes": {
+         "^/": "default.html",
+         "^/search/?": "search.html"
+       }
+     }
+   }
+
+Finally, you'll need to :ref:`create the template ` that
+displays the results of a given search. Create it as you would any
+other template, but rather than render
+``{{ deconst.content.envelope.body }}``, invoke the ``search`` filter
+on the query parameter:
 
.. code-block:: html
 
-   <h1>Your Search Results</h1>
-
-   {% set r = deconst.request.query.q|search %}
-
-   <p>Your search had {{ r.total }} results in {{ r.pages }} pages.</p>
-
-   {% for result in r.results %}
-
-   <div>
-     <a href="{{ result.url }}">{{ result.title }}</a>
-     <p>{{ result.excerpt }}</p>
-   </div>
-
-   {% else %}
-
-   <p>No results found.</p>
-
-   {% endfor %}
+   <h1>Your Search Results</h1>
+
+   {% set r = deconst.request.query.q|search %}
+
+   <p>Your search had {{ r.total }} results in {{ r.pages }} pages.</p>
+
+   {% for result in r.results %}
+
+   <div>
+     <a href="{{ result.url }}">{{ result.title }}</a>
+     <p>{{ result.excerpt }}</p>
+   </div>
+
+   {% else %}
+
+   <p>No results found.</p>
+
+   {% endfor %}
 
The search filter accepts optional keyword parameters:
 
* ``pageNumber`` is the current page number, which defaults to 1.
-* ``perPage`` is the number of results to include in a single page, which defaults to 10.
-* ``categories`` is an array of strings that, if specified, constrain search results to envelopes with at least one matching category.
+
+* ``perPage`` is the number of results to include in a single page,
+  which defaults to 10.
+
+* ``categories`` is an array of strings that, if specified, constrains
+  search results to envelopes with at least one matching category.
+
.. code-block:: html
 
-   {% set query = deconst.request.query %}
-   {% set r = query.q|search(pageNumber=query.page, perPage=query.pageSize) %}
+   {% set query = deconst.request.query %}
+   {% set r = query.q|search(pageNumber=query.page, perPage=query.pageSize) %}
 
-To submit searches from any page, create a form that populates the corresponding query parameters:
+To submit searches from any page, create a form that populates the
+corresponding query parameters:
 
.. code-block:: html
 
-   <form method="get" action="/search/">
-     <input type="text" name="q">
-     <input type="submit" value="Search">
-   </form>
+   <form method="get" action="/search/">
+     <input type="text" name="q">
+     <input type="submit" value="Search">
+   </form>
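
Beyond the basic form above, a results template can also use the keyword parameters together. The following sketch is illustrative only: the ``api-docs`` category name, the ``page`` query parameter, the fixed page size of 20, and the pagination markup are assumptions rather than part of the original example; only the ``search`` filter and its ``pageNumber``, ``perPage``, and ``categories`` parameters come from the documentation above.

.. code-block:: html

   {# Read the requested page from the query string, defaulting to the first page #}
   {% set query = deconst.request.query %}
   {% set page = query.page | default(1) | int %}

   {# Restrict results to envelopes carrying the assumed "api-docs" category #}
   {% set r = query.q | search(pageNumber=page, perPage=20, categories=["api-docs"]) %}

   <p>Page {{ page }} of {{ r.pages }} ({{ r.total }} results).</p>

   {% if page < r.pages %}
     <a href="/search/?q={{ query.q | urlencode }}&page={{ page + 1 }}">Next page</a>
   {% endif %}
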
diff --git a/writing-docs/coordinator/template-assets.rst b/writing-docs/coordinator/template-assets.rst
index a5edbef..62ec5d3 100644
--- a/writing-docs/coordinator/template-assets.rst
+++ b/writing-docs/coordinator/template-assets.rst
@@ -3,4 +3,10 @@ Assets
------
 
-Raw HTML isn't very exciting on its own. To make a site look good and behave sensibly, you'll need to include assets: CSS, JavaScript, images and possibly fonts. Deconst control repositories use a `Grunt `_ plugin to implement an asset pipeline. Grunt's configuration determines the layout and capabilities of the pipeline for any specific deployment, so consult your repository's README for more information.
+Raw HTML isn't very exciting on its own. To make a site look good and
+behave sensibly, you'll need to include assets: CSS, JavaScript,
+images, and possibly fonts. Deconst control repositories use a `Grunt `_
+plugin to implement an asset pipeline. Grunt's configuration
+determines the layout and capabilities of the pipeline for any
+specific deployment, so consult your repository's README for more
+information.
diff --git a/writing-docs/coordinator/templates.rst b/writing-docs/coordinator/templates.rst
index 2a64901..6537542 100644
--- a/writing-docs/coordinator/templates.rst
+++ b/writing-docs/coordinator/templates.rst
@@ -3,91 +3,130 @@ Templates
---------
 
-The visual identity, navigation, and HTML boilerplate used for each page rendered by Deconst is provided by a set of *templates* that are managed within the control repository. Templates are written in `Nunjucks `_ syntax and must be placed in a subdirectory of ``templates`` named after the domain in which they're used. Template files should usually end with an ``.html`` extension.
+The visual identity, navigation, and HTML boilerplate used for each
+page rendered by Deconst are provided by a set of *templates* that are
+managed within the control repository. Templates are written in
+`Nunjucks `_ syntax and must be placed in a subdirectory of
+``templates`` named after the domain in which they're used. Template
+files should usually end with an ``.html`` extension.
 
.. _control-template-syntax:
 
Template Syntax Extensions
^^^^^^^^^^^^^^^^^^^^^^^^^^
 
-There are several special helpers and variables that are made available to each template as it's rendered. Use these to indicate where content from the :term:`metadata envelope` is to be placed.
+There are several special helpers and variables that are made
+available to each template as it's rendered. Use these to indicate
+where content from the :term:`metadata envelope` is to be placed.
 
- * ``{{ deconst.content.envelope.body }}``: This one is very important: it'll be replaced by the actual content of the page.
- * ``{{ deconst.content.envelope.title }}``: The name of the page, if one has been provided.
- * ``{{ deconst.content.envelope.toc }}``: The *local* table of contents for the current page, if one is available.
- * ``{{ deconst.addenda.<name>.envelope.body }}``: The body of a requested *addenda envelope*. For example, the Sphinx preparer cross references a repository-wide table of contents as ``deconst.addenda.repository_toc.envelope.body``.
- * ``{{{ deconst.assets.js_xyz_url }}}``: The final https CDN URL of the JavaScript asset bundle from the "xyz" subdirectory. See :ref:`the assets section ` for more details.
- * ``{{{ deconst.assets.css_xyz_url }}}``: The same thing a CSS asset bundle.
- * ``{{{ deconst.assets.image_xyz_jpg_url }}}``: The asset URL for an image asset.
- * ``{{{ deconst.assets.font_xyz_tff_url }}}``: The asset URL for a font asset.
-
-As a complete example, this set of templates provides basic HTML5 boilerplate, a common sidebar that may be shared among several templates, and a specialized template for blog posts.
+ * ``{{ deconst.content.envelope.body }}``: This one is very
+   important: it'll be replaced by the actual content of the page.
+
+ * ``{{ deconst.content.envelope.title }}``: The name of the page, if
+   one has been provided.
+
+ * ``{{ deconst.content.envelope.toc }}``: The *local* table of
+   contents for the current page, if one is available.
+
+ * ``{{ deconst.addenda.<name>.envelope.body }}``: The body of a
+   requested *addenda envelope*. For example, the Sphinx preparer
+   cross-references a repository-wide table of contents as
+   ``deconst.addenda.repository_toc.envelope.body``.
+
+ * ``{{{ deconst.assets.js_xyz_url }}}``: The final https CDN URL of
+   the JavaScript asset bundle from the "xyz" subdirectory. See
+   :ref:`the assets section ` for more details.
+
+ * ``{{{ deconst.assets.css_xyz_url }}}``: The same thing for a CSS
+   asset bundle.
+
+ * ``{{{ deconst.assets.image_xyz_jpg_url }}}``: The asset URL for an
+   image asset.
+
+ * ``{{{ deconst.assets.font_xyz_tff_url }}}``: The asset URL for a
+   font asset.
+
+As a complete example, this set of templates provides basic HTML5
+boilerplate, a common sidebar that may be shared among several
+templates, and a specialized template for blog posts.
 
``templates/books.horse/_layouts/base.html``
 
.. code-block:: html
 
-   <!DOCTYPE html>
-   <html>
-   <head>
-     <meta charset="utf-8">
-     <title>{{ deconst.content.envelope.title }}</title>
-   </head>
-
-   <body>
-
-     {% block content %}{{ deconst.content.envelope.body }}{% endblock %}
-
-   </body>
-   </html>
+   <!DOCTYPE html>
+   <html>
+   <head>
+     <meta charset="utf-8">
+     <title>{{ deconst.content.envelope.title }}</title>
+   </head>
+
+   <body>
+
+     {% block content %}{{ deconst.content.envelope.body }}{% endblock %}
+
+   </body>
+   </html>
 
``templates/books.horse/_includes/sidebar.html``
 
.. code-block:: html
 
-   <nav class="sidebar"><!-- shared sidebar markup --></nav>
+   <nav class="sidebar"><!-- shared sidebar markup --></nav>
 
``templates/books.horse/blog-post.html``
 
.. code-block:: html
 
-   {% extends "_layouts/base.html" %}
-
-   {% block content %}
-   <div>
-
-     <h1>This is a Blog Post</h1>
-
-     <div>
-       {{ deconst.content.envelope.body }}
-     </div>
-
-     {% include "_includes/sidebar.html" %}
-   </div>
-   {% endblock %}
+   {% extends "_layouts/base.html" %}
+
+   {% block content %}
+   <div>
+
+     <h1>This is a Blog Post</h1>
+
+     <div>
+       {{ deconst.content.envelope.body }}
+     </div>
+
+     {% include "_includes/sidebar.html" %}
+   </div>
+   {% endblock %}
 
.. _control-template-map:
 
Mapping Templates to Pages
^^^^^^^^^^^^^^^^^^^^^^^^^^
 
-Once you have :ref:`templates to render <control-template-syntax>`, you'll need to specify which template will be used for any specific page. Deconst maps templates using a JSON **template mapping file** found within the control repository at ``config/routes.json``. The template mapping file uses regular expressions to apply templates to pages that are :ref:`currently mapped <control-map-syntax>` to any matching URL.
+Once you have :ref:`templates to render <control-template-syntax>`,
+you'll need to specify which template will be used for any specific
+page. Deconst maps templates using a JSON **template mapping file**
+found within the control repository at ``config/routes.json``. The
+template mapping file uses regular expressions to apply templates to
+pages that are :ref:`currently mapped <control-map-syntax>` to any
+matching URL.
 
.. code-block:: json
 
-   {
-     "books.horse": {
-       "routes": {
-         "^/": "default.html",
-         "^/blog/.*": "blog-post.html"
-       }
-     }
-   }
+   {
+     "books.horse": {
+       "routes": {
+         "^/": "default.html",
+         "^/blog/.*": "blog-post.html"
+       }
+     }
+   }
 
-Templates are specified as paths relative to the site's subdirectory of the ``templates/`` directory, so with these mappings:
+Templates are specified as paths relative to the site's subdirectory
+of the ``templates/`` directory, so with these mappings:
 
-#. The page ``https://books.horse/docs/info/`` will be rendered with the template at ``templates/books.horse/default.html``.
-#. The page ``https://books.horse/blog/hello-world/`` will be rendered with the template at ``templates/books.horse/blog-post.html``.
+#. The page ``https://books.horse/docs/info/`` will be rendered with
+   the template at ``templates/books.horse/default.html``.
+
+#. The page ``https://books.horse/blog/hello-world/`` will be rendered
+   with the template at ``templates/books.horse/blog-post.html``.
diff --git a/writing-docs/index.rst b/writing-docs/index.rst
index e1be1a9..19510b5 100644
--- a/writing-docs/index.rst
+++ b/writing-docs/index.rst
@@ -1,10 +1,17 @@
Writing Documentation for Deconst
=================================
 
-If you want to manage or produce content that's published as part of a deconst site, you've come to the right place! Deconst divides site contribution into two distinct roles.
+If you want to manage or produce content that's published as part of a
+deconst site, you've come to the right place! Deconst divides site
+contribution into two distinct roles.
+
+ * **Authors** create content for a part of the site by writing
+   markup.
+
+ * **Coordinators** assemble content from many sources into a single
+   site. Coordinators also control the look and feel of each domain
+   within the site.
 
- * **Authors** create content for a part of the site by writing markup.
- * **Coordinators** assemble content from many sources into a single site. Coordinators also control the look and feel of each domain within the site.
 
.. toctree::
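
As one more illustration of the variables listed in the Template Syntax Extensions section above, the sketch below combines the page body and local table of contents with the repository-wide TOC addendum produced by the Sphinx preparer. The ``docs.html`` file name and the surrounding markup are assumptions for illustration, not part of the original set of templates.

``templates/books.horse/docs.html``

.. code-block:: html

   {% extends "_layouts/base.html" %}

   {% block content %}
   <div>
     {# Repository-wide table of contents, cross-referenced by the Sphinx preparer #}
     <nav>{{ deconst.addenda.repository_toc.envelope.body }}</nav>

     <article>
       <h1>{{ deconst.content.envelope.title }}</h1>

       {# Local table of contents for this page, when one is available #}
       {{ deconst.content.envelope.toc }}

       {{ deconst.content.envelope.body }}
     </article>
   </div>
   {% endblock %}

A coordinator would then route documentation URLs to it in ``config/routes.json``, for example with an entry like ``"^/docs/": "docs.html"``.
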