Read the Docs: documentation simplified
Read the Docs tutorial
In this tutorial you will create a documentation project on Read the Docs by importing a Sphinx project from a GitHub repository, tailor its configuration, and explore several useful features of the platform.
The tutorial is aimed at people interested in learning how to use Read the Docs to host their documentation projects. You will fork a fictional software library similar to the one developed in the official Sphinx tutorial. No prior experience with Sphinx is required and you can follow this tutorial without having done the Sphinx one.
The only things you will need are a web browser, an Internet connection, and a GitHub account (you can register for a free account if you don’t have one). You will use Read the Docs Community, which means that the project will be public.
Getting started
Preparing your project on GitHub
To start, sign in to GitHub and navigate to the tutorial GitHub template, where you will see a green Use this template button. Click it to open a new page that will ask you for some details:
Leave the default “Owner”, or change it to something better for a tutorial project.
Introduce an appropriate “Repository name”, for example rtd-tutorial.
Make sure the project is “Public”, rather than “Private”.
After that, click on the green Create repository from template button, which will generate a new repository on your personal account (or the one of your choosing). This is the repository you will import on Read the Docs, and it contains the following files:
.readthedocs.yaml
Read the Docs configuration file. Required to set up the documentation build process.
README.rst
Basic description of the repository; you will leave it untouched.
pyproject.toml
Python project metadata that makes it installable. Useful for automatic documentation generation from sources.
lumache.py
Source code of the fictional Python library.
docs/
Directory holding all the Sphinx documentation sources, including the Sphinx configuration docs/source/conf.py and the root document docs/source/index.rst written in reStructuredText.

GitHub template for the tutorial
Sign up for Read the Docs
To sign up for a Read the Docs account, navigate to the Sign Up page and choose the option Sign up with GitHub. On the authorization page, click the green Authorize readthedocs button.

GitHub authorization page
Note
Read the Docs needs elevated permissions to perform certain operations that ensure that the workflow is as smooth as possible, like installing webhooks. If you want to learn more, check out Permissions for connected accounts.
After that, you will be redirected to Read the Docs, where you will need to confirm your e-mail and username. Clicking the Sign Up » button will create your account and redirect you to your dashboard.
By now, you should have two email notifications:
One from GitHub, telling you that “A third-party OAuth application … was recently authorized to access your account”. You don’t need to do anything about it.
Another one from Read the Docs, prompting you to “verify your email address”. Click on the link to finalize the process.
Once done, your Read the Docs account is created and ready to import your first project.
Welcome!

Read the Docs empty dashboard
Note
Our commercial site offers some extra features, like support for private projects. You can learn more about our two different sites.
First steps
Importing the project to Read the Docs
To import your GitHub project to Read the Docs, first click on the Import a Project button on your dashboard (or browse to the import page directly). You should see your GitHub account under the “Filter repositories” list on the right. If the list of repositories is empty, click the 🔄 button, and after that all your repositories will appear in the center.

Import projects workflow
Locate your rtd-tutorial project (possibly clicking next ›› at the bottom if you have several pages of projects), and then click on the ➕ button to the right of the name.
The next page will ask you to fill in some details about your Read the Docs project:
- Name
The name of the project. It has to be unique across the whole service, so it is better if you prepend your username, for example {username}-rtd-tutorial.
- Repository URL
The URL that contains the sources. Leave the automatically filled value.
- Default branch
Name of the default branch of the project; leave it as main.
After hitting the Next button, you will be redirected to the project home. You just created your first project on Read the Docs! 🎉

Project home
Checking the first build
Read the Docs will try to build the documentation of your project right after you create it. To see the build logs, click on the Your documentation is building link on the project home, or alternatively navigate to the “Builds” page, then open the one on top (the most recent one).
If the build has not finished yet by the time you open it, you will see a spinner next to an “Installing” or “Building” indicator, meaning that it is still in progress.

First successful documentation build
When the build finishes, you will see a green “Build completed” indicator, the completion date, the elapsed time, and a link to see the corresponding documentation. If you now click on View docs, you will see your documentation live!

HTML documentation live on Read the Docs
Note
Advertisement is one of our main sources of revenue. If you want to learn more about how we fund our operations and explore options to go ad-free, check out our Sustainability page.
If you don’t see the ad, you might be using an ad blocker. Our EthicalAds network respects your privacy, doesn’t target you, and tries to be as unobtrusive as possible, so we kindly ask you not to block us ❤️
Basic configuration changes
You can now proceed to make some basic configuration adjustments. Navigate back to the project page and click on the ⚙ Admin button, which will open the Settings page.
First of all, add the following text in the description:
Lumache (/lu’make/) is a Python library for cooks and food lovers that creates recipes mixing random ingredients.
Then set the project homepage to https://world.openfoodfacts.org/, and write food, python in the list of tags.
All this information will be shown on your project home.
After that, configure your email so you get a notification if the build fails. To do so, click on the Notifications link on the left, type the email where you would like to get the notification, and click the Add button. After that, your email will be shown under “Existing Notifications”.
Trigger a build from a pull request
Read the Docs allows you to trigger builds from GitHub pull requests and gives you a preview of how the documentation would look with those changes.
To enable that functionality, first click on the Settings link on the left under the ⚙ Admin menu, check the “Build pull requests for this project” checkbox, and click the Save button at the bottom of the page.
Next, navigate to your GitHub repository, locate the file docs/source/index.rst, and click on the ✏️ icon on the top-right with the tooltip “Edit this file” to open a web editor (more information on their documentation).

File view on GitHub before launching the editor
In the editor, add the following sentence to the file:
Lumache has its documentation hosted on Read the Docs.
Write an appropriate commit message, and choose the “Create a new branch for this commit and start a pull request” option, typing a name for the new branch. When you are done, click the green Propose changes button, which will take you to the new pull request page, and there click the Create pull request button below the description.

Read the Docs building the pull request from GitHub
After opening the pull request, a Read the Docs check will appear indicating that it is building the documentation for that pull request. If you click on the Details link while it is building, you will access the build logs, otherwise it will take you directly to the documentation. When you are satisfied, you can merge the pull request!
Adding a configuration file
The Admin tab of the project home allows you to change some global configuration values of your project. In addition, you can further customize the building process using the .readthedocs.yaml configuration file. This has several advantages:
The configuration lives next to your code and documentation, tracked by version control.
It can be different for every version (more on versioning in the next section).
Some configurations are only available using the config file.
This configuration file should be part of your Git repository. It should be located in the base folder of the repository and be named .readthedocs.yaml.
In this section, we will show you some examples of what a configuration file should contain.
Tip
Settings that apply to the entire project are controlled in the web dashboard, while settings that are specific to a version or build are better placed in the YAML file.
Changing the Python version
For example, to explicitly use Python 3.8 to build your project, navigate to your GitHub repository, click on the .readthedocs.yaml file, then on the pencil icon ✏️ to edit it, and change the Python version as follows:
version: 2
build:
  os: "ubuntu-22.04"
  tools:
    python: "3.8"
python:
  install:
    - requirements: docs/requirements.txt
sphinx:
  configuration: docs/source/conf.py
The purpose of each key is:
version
Mandatory, specifies version 2 of the configuration file.
build.os
States the name of the base image; required in order to specify the Python version.
build.tools.python
Declares the Python version to be used.
python.install.requirements
Specifies the Python dependencies that need to be installed to build the documentation.
After you commit these changes, go back to your project home, navigate to the “Builds” page, and open the new build that just started. You will notice that one of the lines contains python -mvirtualenv: if you click on it, you will see the full output of the corresponding command, stating that it used Python 3.8.6 to create the virtual environment.

Read the Docs build using Python 3.8
Making warnings more visible
If you navigate to your HTML documentation, you will notice that the index page looks correct but the API section is empty. This is a very common issue with Sphinx, and the reason is stated in the build logs. On the build page you opened before, click on the View raw link on the top right, which opens the build logs in plain text, and you will see several warnings:
WARNING: [autosummary] failed to import 'lumache': no module named lumache
...
WARNING: autodoc: failed to import function 'get_random_ingredients' from module 'lumache'; the following exception was raised:
No module named 'lumache'
WARNING: autodoc: failed to import exception 'InvalidKindError' from module 'lumache'; the following exception was raised:
No module named 'lumache'
To spot these warnings more easily and allow you to address them, you can add the sphinx.fail_on_warning option to your Read the Docs configuration file. For that, navigate to GitHub, locate the .readthedocs.yaml file you created earlier, click on the ✏️ icon, and add these contents:
version: 2
build:
  os: "ubuntu-22.04"
  tools:
    python: "3.8"
python:
  install:
    - requirements: docs/requirements.txt
sphinx:
  configuration: docs/source/conf.py
  fail_on_warning: true
At this point, if you navigate back to your “Builds” page, you will see a Failed build, which is exactly the intended result: the Sphinx project is not properly configured yet, and instead of rendering an empty API page, now the build fails.
The reason sphinx.ext.autosummary and sphinx.ext.autodoc fail to import the code is that it is not installed. Luckily, .readthedocs.yaml also allows you to specify which requirements to install.
To install the library code of your project, go back to editing .readthedocs.yaml on GitHub and modify it as follows:
python:
  install:
    - requirements: docs/requirements.txt
    # Install our python package before building the docs
    - method: pip
      path: .
With this change, Read the Docs will install the Python code before starting the Sphinx build, which will finish seamlessly. If you now go to the API page of your HTML documentation, you will see the lumache summary!
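For reference, here is a sketch of what the complete .readthedocs.yaml looks like at this point in the tutorial, assembled from the snippets above (it assumes the tutorial's paths, such as docs/requirements.txt and docs/source/conf.py):

version: 2
build:
  os: "ubuntu-22.04"
  tools:
    python: "3.8"
python:
  install:
    - requirements: docs/requirements.txt
    # Install our python package before building the docs
    - method: pip
      path: .
sphinx:
  configuration: docs/source/conf.py
  fail_on_warning: true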
Enabling PDF and EPUB builds
Sphinx can build several other formats in addition to HTML, such as PDF and EPUB. You might want to enable these formats for your project so your users can read the documentation offline.
To do so, add this extra content to your .readthedocs.yaml:

sphinx:
  configuration: docs/source/conf.py
  fail_on_warning: true

formats:
  - pdf
  - epub
After this change, PDF and EPUB downloads will be available both from the “Downloads” section of the project home and from the flyout menu.

Downloads available from the flyout menu
Versioning documentation
Read the Docs allows you to have several versions of your documentation, in the same way that you have several versions of your code. By default, it creates a latest version that points to the default branch of your version control system (main in the case of this tutorial), and that’s why the URLs of your HTML documentation contain the string /latest/.
Creating a new version
Let’s say you want to create a 1.0 version of your code, with a corresponding 1.0 version of the documentation. For that, first navigate to your GitHub repository, click on the branch selector, type 1.0.x, and click on “Create branch: 1.0.x from ‘main’” (more information on their documentation).
Next, go to your project home, click on the Versions button, and under “Active Versions” you will see two entries:
The latest version, pointing to the main branch.
A new stable version, pointing to the origin/1.0.x branch.

List of active versions of the project
Right after you created your branch, Read the Docs created a new special version called stable pointing to it, and started building it. When the build finishes, the stable version will be listed in the flyout menu and your readers will be able to choose it.
Note
Read the Docs follows some rules to decide whether to create a stable version pointing to your new branch or tag. To simplify, it will check if the name resembles a version number like 1.0, 2.0.3 or 4.x.
Now you might want to set stable as the default version, rather than latest, so that users see the stable documentation when they visit the root URL of your documentation (while still being able to change the version in the flyout menu).
For that, go to the Settings link under the ⚙ Admin menu of your project home, choose stable in the “Default version*” dropdown, and hit Save at the bottom. Done!
Modifying versions
Both latest and stable are now active, which means that they are visible to users, and new builds can be triggered for them. In addition to these, Read the Docs also created an inactive 1.0.x version, which will always point to the 1.0.x branch of your repository.

List of inactive versions of the project
Let’s activate the 1.0.x version. For that, go to the “Versions” page on your project home, locate 1.0.x under “Activate a version”, and click on the Activate button. This will take you to a new page with two checkboxes, “Active” and “Hidden”. Check only “Active”, and click Save.
After you do this, 1.0.x will appear in the “Active Versions” section, and a new build will be triggered for it.
Note
You can read more about hidden versions in our documentation.
Getting insights from your projects
Once your project is up and running, you will probably want to understand how readers are using your documentation, answering common questions like:
What pages are the most visited?
What search terms are used most frequently?
Are readers finding what they are looking for?
Read the Docs offers you some analytics tools to find out the answers.
Browsing traffic analytics
The Traffic Analytics view shows the top viewed documentation pages of the past 30 days, plus a visualization of the daily views during that period. To generate some artificial views on your newly created project, you can first click around the different pages of your project, which will be counted immediately in the current day’s statistics.
To see the Traffic Analytics view, go back to the project page, click on the ⚙ Admin button, and then click on the Traffic Analytics section. You will see the list of pages in descending order of visits, as well as a plot similar to the one below.

Traffic Analytics plot
Note
The Traffic Analytics view explained above gives you a simple overview of how your readers browse your documentation. It has the advantage that it stores no identifying information about your visitors, and therefore it respects their privacy. However, you might want to get more detailed data by enabling Google Analytics. Notice though that we take some extra measures to respect user privacy when they visit projects that have Google Analytics enabled, and this might reduce the number of visits counted.
Finally, you can also download this data for closer inspection. To do that, scroll to the bottom of the page and click on the Download all data button. That will prompt you to download a CSV file that you can process any way you want.
Browsing search analytics
Apart from traffic analytics, Read the Docs also offers the possibility to inspect what search terms your readers use on your documentation. This can inform decisions on what areas to reinforce, or what parts of your project are less understood or more difficult to find.
To generate some artificial search statistics on the project, go to the HTML documentation, locate the Sphinx search box on the left, type ingredients, and press the Enter key. You will be redirected to the search results page, which will show two entries.
Next, go back to the ⚙ Admin section of your project page, and then click on the Search Analytics section. You will see a table with the most searched queries (including the ingredients one you just typed), how many results each query returned, and how many times it was searched. Below the queries table, you will also see a visualization of the daily number of search queries during the past 30 days.

Most searched terms
As with Traffic Analytics, you can download the whole dataset in CSV format by clicking on the Download all data button.
Where to go from here
This is the end of the tutorial. You started by forking a GitHub repository and importing it on Read the Docs, building its HTML documentation, and then went through a series of steps to customize the build process, tweak the project configuration, and add new versions.
Here you have some resources to continue learning about documentation and Read the Docs:
You can learn more about the functionality of the platform by going over our features page.
To make the most of the documentation generators that are supported, you can read the Sphinx tutorial or the MkDocs User Guide.
Browse example projects and read their source code in Example projects.
Whether you are a documentation author, a project administrator, a developer, or a designer, you can follow our how-to guides that cover specific tasks, available under How-to guides A-Z.
For private project support and other enterprise features, you can use our commercial service (and if in doubt, check out Choosing between our two platforms).
Do you want to join a global community of fellow documentarians? Check out Write the Docs and its Slack workspace.
Do you want to contribute to Read the Docs? We greatly appreciate it! Check out Contributing to Read the Docs.
Happy documenting!
Choosing between our two platforms
Users often ask what the differences are between Read the Docs Community and Read the Docs for Business.
While many features are available on both of these platforms, there are some key differences between the two.
Read the Docs Community
Read the Docs Community is exclusively for hosting open source documentation. We support open source communities by providing free documentation building and hosting services, for projects of all sizes.
Important points:
Open source project hosting is always free
All documentation sites include advertising
Only supports public VCS repositories
All documentation is publicly accessible to the world
Less build time and fewer build resources (memory & CPU)
Email support included only for issues with our platform
Documentation is organized by projects
You can sign up for an account at https://readthedocs.org.
Read the Docs for Business
Read the Docs for Business is meant for companies and users who have more complex requirements for their documentation project. This can include commercial projects with private source code, projects that can only be viewed with authentication, and even large scale projects that are publicly available.
Important points:
Hosting plans require a paid subscription plan
There is no advertising on documentation sites
Allows importing private and public repositories from VCS
Supports private versions that require authentication to view
Supports team authentication, including SSO with Google, GitHub, GitLab, and Bitbucket
More build time and more build resources (memory & CPU)
Includes 24x5 email support, with 24x7 SLA support available
Documentation is organized by organization, giving more control over permissions
You can sign up for an account at https://readthedocs.com.
Questions?
If you have a question about which platform would be best, email us at support@readthedocs.org.
Getting started with Sphinx
Sphinx is a powerful documentation generator that has many great features for writing technical documentation including:
Generate web pages, printable PDFs, documents for e-readers (ePub), and more all from the same sources
You can use reStructuredText or Markdown to write documentation
An extensive system of cross-referencing code and documentation
Syntax highlighted code samples
A vibrant ecosystem of first and third-party extensions
If you want to learn more about how to create your first Sphinx project, read on. If you are interested in exploring the Read the Docs platform using an already existing Sphinx project, check out Read the Docs tutorial.
Quick start
See also
If you already have a Sphinx project, check out our Importing your documentation guide.
Assuming you have Python already, install Sphinx:
pip install sphinx
Create a directory inside your project to hold your docs:
cd /path/to/project
mkdir docs
Run sphinx-quickstart in there:
cd docs
sphinx-quickstart
This quick start will walk you through creating the basic configuration; in most cases, you can just accept the defaults. When it’s done, you’ll have an index.rst, a conf.py and some other files. Add these to revision control.
Now, edit your index.rst and add some information about your project. Include as much detail as you like (refer to the reStructuredText syntax or this template if you need help). Build them to see how they look:
make html
Your index.rst has been built into index.html in your documentation output directory (typically _build/html/index.html). Open this file in your web browser to see your docs.

Your Sphinx project is built
Edit your files and rebuild until you like what you see, then commit your changes and push to your public repository. Once you have Sphinx documentation in a public repository, you can start using Read the Docs by importing your docs.
Warning
We strongly recommend pinning the Sphinx version used to build your project’s docs to avoid potential future incompatibilities.
Using Markdown with Sphinx
You can use Markdown (via MyST) and reStructuredText in the same Sphinx project. We support this natively on Read the Docs, and you can set it up locally:
pip install myst-parser
Then in your conf.py:
extensions = ["myst_parser"]
You can now continue writing your docs in .md files and it will work with Sphinx.
Read the Getting started with MyST in Sphinx docs for additional instructions.
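If the project is built on Read the Docs, myst-parser also needs to be installed in the build environment. A minimal sketch, assuming you list myst-parser in a docs/requirements.txt file (a path chosen here only for illustration):

version: 2
build:
  os: ubuntu-22.04
  tools:
    python: "3.12"
sphinx:
  configuration: docs/conf.py
python:
  install:
    # docs/requirements.txt is assumed to contain at least: myst-parser
    - requirements: docs/requirements.txt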
Get inspired!
You might learn more and find the first ingredients for starting your own documentation project by looking at Example projects - view live example renditions and copy & paste from the accompanying source code.
External resources
Here are some external resources to help you learn more about Sphinx.
Getting started with MkDocs
MkDocs is a documentation generator that focuses on speed and simplicity. It has many great features including:
Preview your documentation as you write it
Easy customization with themes and extensions
Writing documentation with Markdown
Note
MkDocs is a great choice for building technical documentation. However, Read the Docs also supports Sphinx, another tool for writing and building documentation.
Quick start
See also
If you already have a MkDocs project, check out our Importing your documentation guide.
Assuming you have Python already, install MkDocs:
pip install mkdocs
Set up your MkDocs project:
mkdocs new .
This command creates mkdocs.yml, which holds your MkDocs configuration, and docs/index.md, which is the Markdown file that is the entry point for your documentation.
You can edit this index.md file to add more details about your project and then build your documentation:
mkdocs serve
This command builds your Markdown files into HTML and starts a development server to browse your documentation. Open up http://127.0.0.1:8000/ in your web browser to see your documentation. You can make changes to your Markdown files and your docs will automatically rebuild.

Your MkDocs project is built
Once you have your documentation in a public repository such as GitHub, Bitbucket, or GitLab, you can start using Read the Docs by importing your docs.
Warning
We strongly recommend pinning the MkDocs version used to build your project’s docs to avoid potential future incompatibilities.
Get inspired!
You might learn more and find the first ingredients for starting your own documentation project by looking at Example projects - view live example renditions and copy & paste from the accompanying source code.
External resources
Here are some external resources to help you learn more about MkDocs.
Importing your documentation
To import a public documentation repository, visit your Read the Docs dashboard and click Import. For private repositories, please use Read the Docs for Business.
Automatically import your docs
If you have connected your Read the Docs account to GitHub, Bitbucket, or GitLab, you will see a list of your repositories that we are able to import. To import one of these projects, just click the import icon next to the repository you’d like to import. This will bring up a form that is already filled with your project’s information. Feel free to edit any of these properties, and then click Next to build your documentation.

Importing a repository
Manually import your docs
If you have not connected a Git provider account, you will need to select Import Manually and enter the information for your repository yourself. You will also need to manually configure the webhook for your repository. When importing your project, you will be asked for the repository URL, along with some other information for your new project. The URL is normally the URL or path name you’d use to checkout, clone, or branch your repository. Some examples:
Git:
https://github.com/ericholscher/django-kong.git
Mercurial:
https://bitbucket.org/ianb/pip
Subversion:
http://varnish-cache.org/svn/trunk
Bazaar:
lp:pasta
Add an optional homepage URL and some tags, and then click Next.
Once your project is created, you’ll need to manually configure the repository webhook if you would like to have new changes trigger builds for your project on Read the Docs. Go to your project’s Admin > Integrations page to configure a new webhook.
See also
- How to manually configure a Git repository integration
Once you have imported your git project, use this guide to manually set up basic and additional webhook integration.
Note
The Admin page can be found at https://readthedocs.org/dashboard/<project-slug>/edit/. You can access all of the project settings from the admin page sidebar.

Building your documentation
Within a few seconds of completing the import process, your code will automatically be fetched from your repository, and the documentation will be built. Check out our Build process overview page to learn more about how Read the Docs builds your docs, and to troubleshoot any issues that arise.
We require an additional configuration file to build your project. This allows you to specify special requirements for your build, such as your version of Python or how you wish to install additional Python requirements. You can configure these settings in a .readthedocs.yaml file. See our Configuration file overview docs for more details.
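As a rough starting point, a minimal .readthedocs.yaml for a Sphinx project might look like the sketch below; paths and versions are illustrative, and fuller templates appear later in this documentation:

version: 2
build:
  os: ubuntu-22.04
  tools:
    python: "3.12"
sphinx:
  configuration: docs/conf.py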
Note
Using a configuration file has been required since September 2023.
It is also important to note that the default version of Sphinx is v1.8.5. We recommend setting the version your project uses explicitly with pinned dependencies.
Read the Docs will host multiple versions of your code. You can read more about how to use this well on our Versions page.
If you have any more trouble, don’t hesitate to reach out to us. The Site support page has more information on getting in touch.
Example projects
Need inspiration?
Want to bootstrap a new documentation project?
Want to showcase your own solution?
The following example projects show a rich variety of uses of Read the Docs. You can use them for inspiration, for learning and for recipes to start your own documentation projects. View the rendered version of each project and then head over to the Git source to see how it’s done and reuse the code.
Sphinx and MkDocs examples
Topic | Framework | Description
---|---|---
Basic Sphinx | Sphinx | Sphinx example with versioning and Python doc autogeneration
Basic MkDocs | MkDocs | Basic example of using MkDocs
Jupyter Book | Jupyter Book and Sphinx | Jupyter Book with popular integrations configured
Basic AsciiDoc | Antora | Antora with asciidoctor-kroki extension configured for AsciiDoc and Diagram as Code.
Real-life examples
We maintain an Awesome List where you can contribute new shiny examples of using Read the Docs. Please refer to the instructions on how to submit new entries on Awesome Read the Docs Projects.
Contributing an example project
We would love to add more examples that showcase features of Read the Docs or great tools or methods to build documentation projects.
We require that an example project:
is hosted and maintained by you in its own Git repository, example-<topic>.
contains a README.
uses a .readthedocs.yaml configuration.
is added to the above list by opening a PR targeting examples.rst.
We recommend that your project:
has continuous integration and PR builds.
is versioned as a real software project, i.e. using git tags.
covers your most important scenarios, but references external real-life projects whenever possible.
has a minimal tech stack – or whatever you feel comfortable about maintaining.
copies from an existing example project as a template to get started.
We’re excited to see what you come up with!
Configuration file overview
As part of the initial set up for your Read the Docs site, you need to create a configuration file called .readthedocs.yaml.
The configuration file tells Read the Docs what specific settings to use for your project.
This tutorial covers:
Where to put your configuration file.
What to put in the configuration file.
How to customize the configuration for your project.
See also
- Read the Docs tutorial.
Following the steps in our tutorial will help you set up your first documentation project.
Where to put your configuration file
The .readthedocs.yaml file should be placed in the top-most directory of your project’s repository. We will get to the contents of the file in the next steps.
When you have changed the configuration file, you need to commit and push the changes to your Git repository. Read the Docs will then automatically find and use the configuration to build your project.
Note
The Read the Docs configuration file is a YAML file. YAML is a human-friendly data serialization language for all programming languages. To learn more about the structure of these files, see the YAML language overview.
Getting started with a template
Here are some configuration file examples to help you get started.
Pick an example based on the tool that your project is using, copy its contents to .readthedocs.yaml and add the file to your Git repository.
If your project uses Sphinx, we offer a special builder optimized for Sphinx projects.
# Read the Docs configuration file for Sphinx projects
# See https://docs.readthedocs.io/en/stable/config-file/v2.html for details

# Required
version: 2

# Set the OS, Python version and other tools you might need
build:
  os: ubuntu-22.04
  tools:
    python: "3.12"
    # You can also specify other tool versions:
    # nodejs: "20"
    # rust: "1.70"
    # golang: "1.20"

# Build documentation in the "docs/" directory with Sphinx
sphinx:
  configuration: docs/conf.py
  # You can configure Sphinx to use a different builder, for instance use the dirhtml builder for simpler URLs
  # builder: "dirhtml"
  # Fail on all warnings to avoid broken references
  # fail_on_warning: true

# Optionally build your docs in additional formats such as PDF and ePub
# formats:
#   - pdf
#   - epub

# Optional but recommended, declare the Python requirements required
# to build your documentation
# See https://docs.readthedocs.io/en/stable/guides/reproducible-builds.html
# python:
#   install:
#     - requirements: docs/requirements.txt
If your project uses MkDocs, we offer a special builder optimized for MkDocs projects.
# Read the Docs configuration file for MkDocs projects
# See https://docs.readthedocs.io/en/stable/config-file/v2.html for details

# Required
version: 2

# Set the version of Python and other tools you might need
build:
  os: ubuntu-22.04
  tools:
    python: "3.12"

mkdocs:
  configuration: mkdocs.yml

# Optionally declare the Python requirements required to build your docs
python:
  install:
    - requirements: docs/requirements.txt
Editing the template
Now that you have a .readthedocs.yaml file added to your Git repository, you should see Read the Docs trying to build your project with the configuration file. The configuration file probably needs some adjustments to accommodate your exact project setup.
Note
If you added the configuration file in a separate branch, you may have to activate a version for that branch.
If you have added the file in a pull request, you should enable pull request builds.
Skip: file header and comments
There are some parts of the templates that you can leave in place:
- Comments
We added comments that explain the configuration options and optional features. These lines begin with a #.
- Commented out features
We use the # in front of some popular configuration options. They are there as examples, which you can choose to enable, delete or save for later.
- version key
The version key tells the system how to read the rest of the configuration file. The current and only supported version is version 2.
Adjust: build.os
In our examples, we are using Read the Docs’ custom image based on the latest Ubuntu release. Package versions in these images will not change drastically, though they will receive periodic security updates.
You should pay attention to this field if your project needs to build on an older version of Ubuntu, or in the future when you need features from a newer Ubuntu.
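For example, pinning the build to a specific release rather than tracking the latest LTS is a one-line change; this sketch only shows the relevant keys:

build:
  os: ubuntu-20.04  # or ubuntu-22.04, or ubuntu-lts-latest
  tools:
    python: "3.12"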
See also
- build.os
Configuration file reference with all possible values for build.os.
Adjust: Python configuration
If you are using Python in your builds, you should define the Python version in build.tools.python.
The python key contains a list of sub-keys, specifying the requirements to install (see the sketch after this list):
Use python.install.package to install the project itself as a Python package using pip
Use python.install.requirements to install packages from a requirements file
Use build.jobs to install packages using Poetry or PDM
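A sketch combining the first two methods, installing a requirements file and then the project itself as a package with pip (this assumes the project is installable from the repository root, for example via a pyproject.toml):

python:
  install:
    - requirements: docs/requirements.txt
    - method: pip
      path: .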
See also
- build.tools.python
Configuration file reference with all Python versions available for build.tools.python.
- python
Configuration file reference for configuring the Python environment activated by build.tools.python.
Adjust: Sphinx and MkDocs version
If you are using either the sphinx or mkdocs builder, then Sphinx or MkDocs will be installed automatically in its latest version. But we recommend that you specify the version that your documentation project uses.
The requirements key is a file path that points to a text (.txt) file that lists the Python packages you want Read the Docs to install.
See also
- Use a requirements file for Python dependencies
This guide explains how to specify Python requirements, such as the version of Sphinx or MkDocs.
- sphinx
Configuration file reference for configuring the Sphinx builder.
- mkdocs
Configuration file reference for configuring the MkDocs builder.
Next steps
There are more configuration options than the ones mentioned in this guide. After you add a configuration file to your Git repository and can see that Read the Docs is building your documentation using it, you should have a look at the complete configuration file reference for options that might apply to your project.
See also
- Configuration file reference
The complete list of all possible .readthedocs.yaml settings, including the optional settings not covered on this page.
- Build process customization
Are you familiar with running a command line? Perhaps there are special commands that you know you want Read the Docs to run. Read this guide and learn more about how you add your own commands to .readthedocs.yaml.
Configuration file reference
Read the Docs supports configuring your documentation builds with a configuration file.
This file is named .readthedocs.yaml and should be placed in the top level of your Git repository.
The .readthedocs.yaml file can contain a number of settings that are not accessible through the Read the Docs website.
Because the file is stored in Git, the configuration will apply to the exact version that is being built. This allows you to store different configurations for different versions of your documentation.
Below is an example YAML file which shows the most common configuration options:
# Read the Docs configuration file for Sphinx projects
# See https://docs.readthedocs.io/en/stable/config-file/v2.html for details

# Required
version: 2

# Set the OS, Python version and other tools you might need
build:
  os: ubuntu-22.04
  tools:
    python: "3.12"
    # You can also specify other tool versions:
    # nodejs: "20"
    # rust: "1.70"
    # golang: "1.20"

# Build documentation in the "docs/" directory with Sphinx
sphinx:
  configuration: docs/conf.py
  # You can configure Sphinx to use a different builder, for instance use the dirhtml builder for simpler URLs
  # builder: "dirhtml"
  # Fail on all warnings to avoid broken references
  # fail_on_warning: true

# Optionally build your docs in additional formats such as PDF and ePub
# formats:
#   - pdf
#   - epub

# Optional but recommended, declare the Python requirements required
# to build your documentation
# See https://docs.readthedocs.io/en/stable/guides/reproducible-builds.html
# python:
#   install:
#     - requirements: docs/requirements.txt
# Read the Docs configuration file for MkDocs projects
# See https://docs.readthedocs.io/en/stable/config-file/v2.html for details

# Required
version: 2

# Set the version of Python and other tools you might need
build:
  os: ubuntu-22.04
  tools:
    python: "3.12"

mkdocs:
  configuration: mkdocs.yml

# Optionally declare the Python requirements required to build your docs
python:
  install:
    - requirements: docs/requirements.txt
See also
- Configuration file overview
Practical steps to add a configuration file to your documentation project.
Supported settings
Read the Docs validates every configuration file. Any configuration option that isn’t supported will make the build fail. This is to avoid typos and provide feedback on invalid configurations.
Warning
When using a v2 configuration file, the local settings from the web interface are ignored.
version
- Required: true
Example:
version: 2
formats
Additional formats of the documentation to be built, apart from the default HTML.
- Type: list
- Options: htmlzip, pdf, epub, all
- Default: []
Example:
version: 2
# Default
formats: []
version: 2
# Build PDF & ePub
formats:
- epub
- pdf
Note
You can use the all
keyword to indicate all formats.
version: 2
# Build all formats
formats: all
Warning
At the moment, only Sphinx supports additional formats. pdf, epub, and htmlzip output is not yet supported when using MkDocs.
With builds from pull requests, only HTML formats are generated. Other formats are resource intensive and will be built after merging.
python
Configuration of the Python environment to be used.
version: 2
python:
  install:
    - requirements: docs/requirements.txt
    - method: pip
      path: .
      extra_requirements:
        - docs
    - method: pip
      path: another/package
python.install
List of installation methods of packages and requirements. You can have several of the following methods.
- Type: list
- Default: []
Requirements file
Install packages from a requirements file.
The path to the requirements file, relative to the root of the project.
- Key: requirements
- Type: path
- Required: false
Example:

version: 2
python:
  install:
    - requirements: docs/requirements.txt
    - requirements: requirements.txt
Warning
If you are using a Conda environment to manage the build, this setting will not have any effect. Instead, add the extra requirements to the Conda environment file.
Packages
Install the project using pip install (recommended) or python setup.py install (deprecated).
The path to the package, relative to the root of the project.
- Key: path
- Type: path
- Required: false
The installation method.
- Key: method
- Options: pip, setuptools (deprecated)
- Default: pip
Extra requirements section to install in addition to the package dependencies.
Warning
You need to install your project with pip to use extra_requirements.
- Key: extra_requirements
- Type: list
- Default: []
Example:
version: 2
python:
  install:
    - method: pip
      path: .
      extra_requirements:
        - docs

With the previous settings, Read the Docs will execute the following command:

pip install .[docs]
conda
Configuration for Conda support.
version: 2
build:
  os: "ubuntu-22.04"
  tools:
    python: "mambaforge-22.9"
conda:
  environment: environment.yml
conda.environment
The path to the Conda environment file, relative to the root of the project.
- Type: path
- Required: false
Note
When using Conda, it’s required to specify build.tools.python to tell Read the Docs whether to use Conda or Mamba to create the environment.
build
Configuration for the documentation build process. This allows you to specify the base Read the Docs image used to build the documentation, and control the versions of several tools: Python, Node.js, Rust, and Go.
version: 2
build:
  os: ubuntu-22.04
  tools:
    python: "3.12"
    nodejs: "18"
    rust: "1.64"
    golang: "1.19"
build.os
The Docker image used for building the docs. Image names refer to the operating system Read the Docs uses to build them.
Note
Arbitrary Docker images are not supported.
- Type: string
- Options: ubuntu-20.04, ubuntu-22.04, ubuntu-lts-latest
- Required: true
Note
The ubuntu-lts-latest option refers to the latest Ubuntu LTS version available on Read the Docs, which may not match the latest Ubuntu LTS officially released.
Warning
Using ubuntu-lts-latest may break your builds unexpectedly if your project isn’t compatible with the newest Ubuntu LTS version when it’s updated by Read the Docs.
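If you do want to track the rolling image despite the caveat above, the setting looks like this (a minimal sketch):

version: 2
build:
  os: ubuntu-lts-latest
  tools:
    python: "3.12"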
build.tools
Version specifiers for each tool. It must contain at least one tool.
- Type: dict
- Options: python, nodejs, ruby, rust, golang
- Required: true
Note
Each tool has a latest option available, which refers to the latest version available on Read the Docs, which may not match the latest version officially released. Versions and the latest option are updated at least once every six months to keep up with the latest releases.
Warning
Using latest may break your builds unexpectedly if your project isn’t compatible with the newest version of the tool when it’s updated by Read the Docs.
build.tools.python
Python version to use. You can use several interpreters and versions, from CPython, Miniconda, and Mamba.
Note
If you use Miniconda3 or Mambaforge, you can select the Python version using the environment.yml file. See our How to use Conda as your Python environment guide for more information.
- Type: string
- Options:
2.7
3 (alias for the latest 3.x version available on Read the Docs)
3.6
3.7
3.8
3.9
3.10
3.11
3.12
latest (alias for the latest version available on Read the Docs)
miniconda3-4.7
miniconda-latest (alias for the latest version available on Read the Docs)
mambaforge-4.10
mambaforge-22.9
mambaforge-latest (alias for the latest version available on Read the Docs)
build.tools.nodejs
Node.js version to use.
- Type: string
- Options:
14
16
18
19
20
latest (alias for the latest version available on Read the Docs)
build.tools.ruby
Ruby version to use.
- Type: string
- Options:
3.3
latest (alias for the latest version available on Read the Docs)
build.tools.rust
Rust version to use.
- Type: string
- Options:
1.55
1.61
1.64
1.70
1.75
latest (alias for the latest version available on Read the Docs)
build.tools.golang
Go version to use.
- Type: string
- Options:
1.17
1.18
1.19
1.20
1.21
latest (alias for the latest version available on Read the Docs)
build.apt_packages
List of APT packages to install. Our build servers run various Ubuntu LTS versions with the default set of package repositories installed. We don’t currently support PPA’s or other custom repositories.
- Type: list
- Default: []

version: 2
build:
  apt_packages:
    - libclang
    - cmake
Note
When possible, avoid installing Python packages using apt (python3-numpy, for example); use pip or conda instead.
Warning
Currently, it’s not possible to use this option when using build.commands.
build.jobs
Commands to be run before or after the Read the Docs pre-defined build jobs. This allows you to run custom commands at a particular moment in the build process. See Build process customization for more details.
version: 2
build:
  os: ubuntu-22.04
  tools:
    python: "3.12"
  jobs:
    pre_create_environment:
      - echo "Command run at 'pre_create_environment' step"
    post_build:
      - echo "Command run at 'post_build' step"
      - echo `date`
Note
Each key under build.jobs must be a list of strings. build.os and build.tools are also required to use build.jobs.
- Type: dict
- Allowed keys: post_checkout, pre_system_dependencies, post_system_dependencies, pre_create_environment, post_create_environment, pre_install, post_install, pre_build, post_build
- Required: false
- Default: {}
build.commands
Specify a list of commands that Read the Docs will run on the build process. When build.commands is used, none of the pre-defined build jobs will be executed (see Build process customization for more details). This allows you to run custom commands and control the build process completely. The $READTHEDOCS_OUTPUT/html directory will be uploaded and hosted by Read the Docs.
Warning
This feature is in a beta phase and could suffer incompatible changes or even be removed completely in the near future. We are currently testing the new addons integrations we are building on projects using the build.commands configuration key. Use it at your own risk.
version: 2
build:
  os: ubuntu-22.04
  tools:
    python: "3.12"
  commands:
    - pip install pelican
    - pelican --settings docs/pelicanconf.py --output $READTHEDOCS_OUTPUT/html/ docs/
Note
build.os and build.tools are also required when using build.commands.

- Type: list
- Required: false
- Default: []
sphinx
Configuration for Sphinx documentation (this is the default documentation type).
version: 2
sphinx:
  builder: html
  configuration: conf.py
  fail_on_warning: true
Note
If you want to pin Sphinx to a specific version, use a requirements.txt or environment.yml file (see Requirements file and conda.environment). If you are using a metadata file to describe code dependencies like setup.py, pyproject.toml, or similar, you can use the extra_requirements option (see Packages). This also allows you to override the default pinning done by Read the Docs if your project was created before October 2020.
sphinx.builder
The builder type for the Sphinx documentation.
- Type: string
- Options: html, dirhtml, singlehtml
- Default: html
Note
The htmldir builder option was renamed to dirhtml to use the same name as Sphinx. Configurations using the old name will continue working.
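For example, selecting the dirhtml builder looks like this (a minimal sketch; the configuration path is illustrative):

version: 2
sphinx:
  builder: dirhtml
  configuration: docs/conf.py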
sphinx.configuration
The path to the conf.py file, relative to the root of the project.
- Type: path
- Default: null
If the value is null, Read the Docs will try to find a conf.py file in your project.
sphinx.fail_on_warning
Turn warnings into errors (-W and --keep-going options). This means the build fails if there is a warning and exits with exit status 1.
- Type: bool
- Default: false
mkdocs
Configuration for MkDocs documentation.
version: 2
mkdocs:
  configuration: mkdocs.yml
  fail_on_warning: false
Note
If you want to pin MkDocs to a specific version, use a requirements.txt or environment.yml file (see Requirements file and conda.environment). If you are using a metadata file to describe code dependencies like setup.py, pyproject.toml, or similar, you can use the extra_requirements option (see Packages). This also allows you to override the default pinning done by Read the Docs if your project was created before March 2021.
mkdocs.configuration
The path to the mkdocs.yml file, relative to the root of the project.
- Type: path
- Default: null
If the value is null, Read the Docs will try to find a mkdocs.yml file in your project.
mkdocs.fail_on_warning
Turn warnings into errors. This means that the build stops at the first warning and exits with exit status 1.
- Type: bool
- Default: false
submodules
VCS submodules configuration.
Note
Only Git is supported at the moment.
Warning
You can’t use include and exclude settings for submodules at the same time.

version: 2
submodules:
  include:
    - one
    - two
  recursive: true
submodules.include
List of submodules to be included.
- Type: list
- Default: []
Note
You can use the all keyword to include all submodules.

version: 2
submodules:
  include: all
submodules.exclude
List of submodules to be excluded.
- Type: list
- Default: []
Note
You can use the all keyword to exclude all submodules. This is the same as include: [].

version: 2
submodules:
  exclude: all
submodules.recursive
Do a recursive clone of the submodules.
- Type: bool
- Default: false
Note
This is ignored if there aren’t submodules to clone.
search
Settings for more control over Server side search.
version: 2
search:
  ranking:
    api/v1/*: -1
    api/v2/*: 4
  ignore:
    - 404.html
search.ranking
Set a custom search rank over pages matching a pattern.
- Type: map of patterns to ranks
- Default: {}
Patterns are matched against the relative paths of the HTML files produced by the build: you should try to match index.html, not docs/index.rst, nor /en/latest/index.html.
Patterns can include one or more of the following special characters:
* matches everything, including slashes.
? matches any single character.
[seq] matches any character in seq.
The rank can be an integer number between -10 and 10 (inclusive). Pages with a rank closer to -10 will appear further down the list of results, and pages with a rank closer to 10 will appear higher in the list of results. Note that 0 means normal rank, not no rank.
If you are looking to completely ignore a page, check search.ignore.
version: 2
search:
  ranking:
    # Match a single file
    tutorial.html: 2
    # Match all files under the api/v1 directory
    api/v1/*: -5
    # Match all files named guides.html,
    # two patterns are needed to match both the root and nested files.
    'guides.html': 3
    '*/guides.html': 3
Note
The final rank will be the last pattern to match the page.
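For example, in the following sketch both patterns match api/v2/index.html; because 'api/v2/*' is listed last, its rank of 2 is the one applied:

version: 2
search:
  ranking:
    # Both patterns match api/v2/index.html,
    # but the last matching pattern wins, so the final rank is 2.
    'api/*': -3
    'api/v2/*': 2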
Tip
It is better to decrease the rank of pages you want to deprecate than to increase the rank of other pages.
search.ignore
List of paths to ignore and exclude from the search index. Paths matched will not be included in search results.
- Type: list of patterns
- Default: ['search.html', 'search/index.html', '404.html', '404/index.html']
Patterns are matched against the relative paths of the HTML files produced by the build: you should try to match index.html, not docs/index.rst, nor /en/latest/index.html.
Patterns can include one or more of the following special characters:
* matches everything (including slashes).
? matches any single character.
[seq] matches any character in seq.
version: 2
search:
  ignore:
    # Ignore a single file in the root of the output directory
    - 404.html
    # Ignore all files under the search/ directory
    - search/*
    # Ignore all files named ref.html,
    # two patterns are needed to match both the root and nested files.
    - 'ref.html'
    - '*/ref.html'

version: 2
search:
  ignore:
    # Custom files to ignore
    - file.html
    - api/v1/*
    # Defaults
    - search.html
    - search/index.html
    - 404.html
    - 404/index.html
Note
Since Read the Docs falls back to the original search engine when no results are found, you may still see search results from ignored pages.
Schema
You can see the complete schema here. This schema is available at Schema Store, use it with your favorite editor for validation and autocompletion.
Automation rules
Automation rules allow project maintainers to automate actions on new branches and tags in Git repositories. If you are familiar with GitOps, this might seem familiar. The goal of automation rules is to be able to control versioning through your Git repository and avoid duplicating these efforts on Read the Docs.
See also
- How to manage versions automatically
A practical guide to managing automated versioning of your documentation.
- Versions
General explanation of how versioning works on Read the Docs
How automation rules work
When a new tag or branch is pushed to your repository, Read the Docs receives a webhook. We then create a new Read the Docs version that matches your new Git tag or branch.
All automation rules are evaluated for this version, in the order they are listed. If the version matches the version type and the pattern in the rule, the specified action is performed on that version.
Note
Versions can match multiple automation rules, and all matching actions will be performed on the version.
Matching a version in Git
We have a couple of predefined ways to match against versions that are created, and you can also define your own.
Predefined matches
Automation rules support two predefined version matches:
Any version: All new versions will match the rule.
SemVer versions: All new versions that follow semantic versioning will match the rule.
Custom matches
If none of the above predefined matches meet your use case, you can use a Custom match.
The custom match should be a valid Python regular expression. Each new version will be tested against this regular expression.
Actions for versions
When an automation rule matches a new version, the specified action is performed on that version. Currently, the following actions are available:
- Activate version
Activates and builds the version.
- Hide version
Hides the version. If the version is not active, activates it and builds the version. See Version states.
- Make version public
Sets the version’s privacy level to public. See Privacy levels.
- Make version private
Sets the version’s privacy level to private. See Privacy levels.
- Set version as default
Sets the version as the default version. It also activates and builds the version. See Root URL redirect at /.
- Delete version
When a branch or tag is deleted from your repository, Read the Docs will delete it only if it isn’t active. This action allows you to delete active versions when a branch or tag is deleted from your repository.
There are a couple of caveats to these rules that are useful to know:
The default version isn’t deleted even if it matches a rule. You can use the Set version as default action to change the default version before deleting the current one.
If your versions follow PEP 440, Read the Docs activates and builds the version if it’s greater than the current stable version. The stable version is also automatically updated at the same time. See more in Versions.
Order
When a new Read the Docs version is created, all rules with a successful match will have their action triggered, in the order they appear on the Automation Rules page.
Examples
Activate only new branches that belong to the 1.x release
Custom match: ^1\.\d+$
Version type: Branch
Action: Activate version
Delete an active version when a branch is deleted
Match: Any version
Version type: Branch
Action: Delete version
How to create reproducible builds
Building your documentation requires a number of dependencies. If your docs don't have reproducible builds, an update in a dependency can break your builds when least expected, or make your docs look different from your local version. This guide will help you keep your builds working over time, so that you can focus on content.
Use a .readthedocs.yaml
configuration file
We recommend using a configuration file to manage your documentation. Our config file gives you per-version settings, and those settings live in your Git repository.
This allows you to validate changes using pull requests, and ensures that all your versions can be rebuilt from a reproducible configuration.
Use a requirements file for Python dependencies
We recommend using a Pip requirements file or Conda environment file to pin Python dependencies. This ensures that top-level dependencies and extensions don’t change.
A configuration file with explicit dependencies looks like this:
version: 2
build:
os: "ubuntu-22.04"
tools:
python: "3.12"
# Build from the docs/ directory with Sphinx
sphinx:
configuration: docs/conf.py
# Explicitly set the version of Python and its requirements
python:
install:
- requirements: docs/requirements.txt
And a docs/requirements.txt file that pins the documentation dependencies:
# Defining the exact version will make sure things don't break
sphinx==5.3.0
sphinx_rtd_theme==1.1.1
readthedocs-sphinx-search==0.1.1
Tip
Remember to update your docs' dependencies from time to time to get new improvements and fixes. It also makes them easier to manage when a version reaches its end-of-support date.
Pin your transitive dependencies
Once you have pinned your own dependencies, the next things to worry about are the dependencies of your dependencies. These are called transitive dependencies, and they can upgrade without warning if you do not pin these packages as well.
We recommend pip-tools to help address this problem.
It allows you to specify a requirements.in
file with your top-level dependencies,
and it generates a requirements.txt
file with the full set of transitive dependencies.
- ✅ Good:
All your transitive dependencies are defined, which ensures new package releases will not break your docs.
docs/requirements.in:
sphinx==5.3.0
docs/requirements.txt:
#
# This file is autogenerated by pip-compile with Python 3.10
# by the following command:
#
#    pip-compile docs.in
#
alabaster==0.7.12
    # via sphinx
babel==2.11.0
    # via sphinx
certifi==2022.12.7
    # via requests
charset-normalizer==2.1.1
    # via requests
docutils==0.19
    # via sphinx
idna==3.4
    # via requests
imagesize==1.4.1
    # via sphinx
jinja2==3.1.2
    # via sphinx
markupsafe==2.1.1
    # via jinja2
packaging==22.0
    # via sphinx
pygments==2.13.0
    # via sphinx
pytz==2022.7
    # via babel
requests==2.28.1
    # via sphinx
snowballstemmer==2.2.0
    # via sphinx
sphinx==5.3.0
    # via -r docs.in
sphinxcontrib-applehelp==1.0.2
    # via sphinx
sphinxcontrib-devhelp==1.0.2
    # via sphinx
sphinxcontrib-htmlhelp==2.0.0
    # via sphinx
sphinxcontrib-jsmath==1.0.1
    # via sphinx
sphinxcontrib-qthelp==1.0.3
    # via sphinx
sphinxcontrib-serializinghtml==1.1.5
    # via sphinx
urllib3==1.26.13
    # via requests
Check list ✅
If you followed this guide, you have pinned:
tool versions (Python, Node)
top-level dependencies (Sphinx, Sphinx extensions)
transitive dependencies (Pytz, Jinja2)
This will protect your builds from failures because of a random tool or dependency update.
You do still need to upgrade your dependencies from time to time, but you should do that on your own schedule.
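As a reference point, a minimal sketch of a configuration that pins tool versions for both Python and Node.js might look like this (the versions and the requirements file path are only examples; pick the ones your project actually needs):
version: 2
build:
  os: "ubuntu-22.04"
  tools:
    python: "3.12"
    nodejs: "20"
python:
  install:
    - requirements: docs/requirements.txt
sphinx:
  configuration: docs/conf.py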
See also
- Configuration file reference
Configuration file reference
- Build process overview
Build process information
- Build process customization
Customizing builds to do more
Build process overview
Once a project has been imported and a build is triggered, Read the Docs executes a set of pre-defined jobs to build and upload documentation. This page explains in detail what happens behind the scenes, and includes an overview of how you can change this process.
Understanding the build process
Understanding how your content is built helps with debugging any problems you might hit. It also gives you the knowledge to customize the build process.
Note
All the steps are run inside a Docker container, using the image defined in build.os. The build has access to all pre-defined environment variables and custom environment variables.
The build process includes the following jobs; a minimal configuration touching the related keys is sketched after this list:
- checkout:
Checks out a project’s code from the repository URL. On Read the Docs for Business, this environment includes the SSH deploy key that gives access to the repository.
- system_dependencies:
Installs operating system and runtime dependencies. This includes specific versions of a language (e.g. Python, Node.js, Go, Rust) and also apt packages. build.tools can be used to define a language version, and build.apt_packages to define apt packages.
- create_environment:
Creates a Python environment to install all the dependencies in an isolated and reproducible way. Depending on what’s defined by the project, a virtualenv or a conda environment (conda) will be used.
- install:
Installs default and project dependencies. This includes any requirements you have configured in Requirements file.
If the project has extra Python requirements, python.install can be used to specify them.
Tip
We strongly recommend pinning all the versions required to build the documentation to avoid unexpected build errors.
- build:
Runs the main command to build the documentation for each of the formats declared (formats). It will use Sphinx (sphinx) or MkDocs (mkdocs) depending on the project.
- upload:
Once the build process finishes successfully, the resulting artifacts are uploaded to our servers. Our CDN is then purged so your docs are always up to date.
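To make these jobs concrete, here is a minimal sketch of a configuration that touches the keys mentioned above (the apt package, requirements file, and Sphinx configuration path are only examples):
version: 2
build:
  os: "ubuntu-22.04"
  tools:
    python: "3.12"
  # Installed during the system_dependencies job
  apt_packages:
    - graphviz
python:
  # Installed during the install job
  install:
    - requirements: docs/requirements.txt
sphinx:
  # Used during the build job
  configuration: docs/conf.py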
See also
If you require additional build steps or customization, it’s possible to run user-defined commands and customize the build process.
Cancelling builds
There may be situations where you want to cancel a running build. Cancelling builds allows your team to speed up review times and also helps us reduce server costs and our environmental footprint.
A couple of common reasons you might want to cancel builds are:
the build has an external dependency that hasn’t been updated
there were no changes on the documentation files
For these scenarios, Read the Docs supports three different mechanisms to cancel a running build:
- Manually:
Once a build has been triggered, project administrators can go to the build detail page and click Cancel build.
- Automatically:
When Read the Docs detects a push to a version that is already building, it cancels the running build and starts a new build using the latest commit.
- Programmatically:
You can use user-defined commands on build.jobs or build.commands (see Build process customization) to check for your own cancellation condition and then return exit code 183 to cancel a build. You can exit with the code 0 to continue running the build.
When this happens, Read the Docs will notify your Git platform (GitHub/GitLab) that the build succeeded (✅), so the pull request doesn't have any failing checks.
Tip
Take a look at Cancel build based on a condition section for some examples.
Build resources
Every build has limited resources assigned to it. Generally, Read the Docs for Business users get double the build resources, with the option to increase that.
Our build limits are:
Read the Docs for Business:
30 minutes build time
7GB of memory
Concurrent builds vary based on your pricing plan
If you are having trouble with your documentation builds, you can reach our support at support@readthedocs.com.
Read the Docs Community:
15 minutes build time
3GB of memory
2 concurrent builds
We can increase build limits on a per-project basis. Send an email to support@readthedocs.org providing a good reason why your documentation needs more resources.
If your business is hitting build limits hosting documentation on Read the Docs, please consider Read the Docs for Business which has much higher build resources.
Build process customization
Read the Docs has a well-defined build process that works for many projects. We also allow customization of builds in two ways:
- Extend the build process
Keep using the default build process, adding your own commands.
- Override the build process
This option gives you full control over your build. Read the Docs supports any tool that generates HTML.
Extend the build process
In the normal build process, the pre-defined jobs checkout, system_dependencies, create_environment, install, build and upload are executed. Read the Docs also exposes these jobs, which allows you to customize the build process by adding shell commands.
The jobs where users can customize our default build process are:
Step | Customizable jobs
---|---
Checkout | post_checkout
System dependencies | pre_system_dependencies, post_system_dependencies
Create environment | pre_create_environment, post_create_environment
Install | pre_install, post_install
Build | pre_build, post_build
Upload | No customizable jobs currently
Note
The pre-defined jobs (checkout, system_dependencies, etc.) cannot be overridden or skipped.
You can fully customize things in Override the build process.
These jobs are defined using the Configuration file reference with the build.jobs key. This example configuration defines commands to be executed before installing and after the build has finished:
version: 2
build:
os: "ubuntu-22.04"
tools:
python: "3.10"
jobs:
pre_install:
- bash ./scripts/pre_install.sh
post_build:
- curl -X POST \
-F "project=${READTHEDOCS_PROJECT}" \
-F "version=${READTHEDOCS_VERSION}" https://example.com/webhooks/readthedocs/
User-defined job limitations
The current working directory is at the root of your project’s cloned repository
Environment variables are expanded for each individual command (see Environment variable reference)
Each command is executed in a new shell process, so modifications done to the shell environment do not persist between commands (see the sketch after this list)
Any command returning non-zero exit code will cause the build to fail immediately (note there is a special exit code to cancel the build)
build.os and build.tools are required when using build.jobs
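For example, because each command runs in its own shell, exporting a variable in one list item does not make it visible in the next one. A minimal sketch of chaining related steps with && so they share the same shell environment (EXAMPLE_FLAG is a hypothetical variable used only for illustration):
version: 2
build:
  os: "ubuntu-22.04"
  tools:
    python: "3.12"
  jobs:
    post_install:
      # Both steps run in the same shell process, so the exported variable is visible
      - export EXAMPLE_FLAG=1 && echo "EXAMPLE_FLAG is $EXAMPLE_FLAG"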
build.jobs
examples
We've included some common examples where using build.jobs will be useful. These examples may require some adaptation for each project's use case; we recommend you use them as a starting point.
Unshallow git clone
Read the Docs does not perform a full clone in the checkout
job in order to reduce network data and speed up the build process.
Instead, it performs a shallow clone and only fetches the branch or tag that you are building documentation for.
Because of this, extensions that depend on the full Git history will fail.
To avoid this, it’s possible to unshallow the git clone:
version: 2
build:
os: "ubuntu-20.04"
tools:
python: "3.10"
jobs:
post_checkout:
- git fetch --unshallow || true
If your build also relies on the contents of other branches, it may also be necessary to re-configure git to fetch these:
version: 2
build:
os: "ubuntu-20.04"
tools:
python: "3.10"
jobs:
post_checkout:
- git fetch --unshallow || true
- git config remote.origin.fetch '+refs/heads/*:refs/remotes/origin/*' || true
- git fetch --all --tags || true
Cancel build based on a condition
When a command exits with code 183, Read the Docs will cancel the build immediately. You can use this approach to cancel builds that you don't want to complete based on some conditional logic.
Note
Why was 183 chosen as the exit code?
It's the word "skip" encoded in ASCII (the sum of its character codes), taken modulo 256 because Unix applies this automatically to exit codes greater than 255.
>>> sum(list("skip".encode("ascii")))
439
>>> 439 % 256
183
Here is an example that cancels builds from pull requests when there are no changes to the docs/
folder compared to the origin/main
branch:
version: 2
build:
os: "ubuntu-22.04"
tools:
python: "3.12"
jobs:
post_checkout:
# Cancel building pull requests when there are no changes in the docs directory or YAML file.
# You can add any other files or directories that you'd like here as well,
# like your docs requirements file, or other files that will change your docs build.
#
# If there are no changes (git diff exits with 0) we force the command to return with 183.
# This is a special exit code on Read the Docs that will cancel the build immediately.
- |
if [ "$READTHEDOCS_VERSION_TYPE" = "external" ] && git diff --quiet origin/main -- docs/ .readthedocs.yaml;
then
exit 183;
fi
This other example shows how to cancel a build if the commit message contains skip ci:
version: 2
build:
os: "ubuntu-22.04"
tools:
python: "3.12"
jobs:
post_checkout:
# Use `git log` to check if the latest commit contains "skip ci",
# in that case exit the command with 183 to cancel the build
- (git --no-pager log --pretty="tformat:%s -- %b" -1 | grep -viq "skip ci") || exit 183
Generate documentation from annotated sources with Doxygen
It’s possible to run Doxygen as part of the build process to generate documentation from annotated sources:
version: 2
build:
os: "ubuntu-20.04"
tools:
python: "3.10"
jobs:
pre_build:
# Note that this HTML won't be automatically uploaded,
# unless your documentation build includes it somehow.
- doxygen
Use MkDocs extensions with extra required steps
There are some MkDocs extensions that require specific commands to be run to generate extra pages before performing the build. For example, pydoc-markdown:
version: 2
build:
os: "ubuntu-20.04"
tools:
python: "3.10"
jobs:
pre_build:
- pydoc-markdown --build --site-dir "$READTHEDOCS_OUTPUT/html"
Avoid having a dirty Git index
Read the Docs needs to modify some files before performing the build to be able to integrate with some of its features. Because of this, the Git index may become dirty (Git will detect modified files). If this happens and the project is using any kind of extension that generates a version based on Git metadata (like setuptools_scm), an invalid version number may be generated. In that case, the Git index can be updated to ignore the files that Read the Docs has modified.
version: 2
build:
os: "ubuntu-20.04"
tools:
python: "3.10"
jobs:
pre_install:
- git update-index --assume-unchanged environment.yml docs/conf.py
Perform a check for broken links
Sphinx comes with a linkcheck builder that checks for broken external links included in the project’s documentation. This helps ensure that all external links are still valid and readers aren’t linked to non-existent pages.
version: 2
build:
os: "ubuntu-20.04"
tools:
python: "3.10"
jobs:
pre_build:
- python -m sphinx -b linkcheck -D linkcheck_timeout=1 docs/ $READTHEDOCS_OUTPUT/linkcheck
Support Git LFS (Large File Storage)
In case the repository contains large files that are tracked with Git LFS, there are some extra steps required to be able to download their content. It's possible to use the post_checkout user-defined job for this.
version: 2
build:
os: "ubuntu-20.04"
tools:
python: "3.10"
jobs:
post_checkout:
# Download and uncompress the binary
# https://git-lfs.github.com/
- wget https://github.com/git-lfs/git-lfs/releases/download/v3.1.4/git-lfs-linux-amd64-v3.1.4.tar.gz
- tar xvfz git-lfs-linux-amd64-v3.1.4.tar.gz
# Modify LFS config paths to point where git-lfs binary was downloaded
- git config filter.lfs.process "`pwd`/git-lfs filter-process"
- git config filter.lfs.smudge "`pwd`/git-lfs smudge -- %f"
- git config filter.lfs.clean "`pwd`/git-lfs clean -- %f"
# Make LFS available in current repository
- ./git-lfs install
# Download content from remote
- ./git-lfs fetch
# Make the local files contain the real content
- ./git-lfs checkout
Install Node.js dependencies
It’s possible to install Node.js together with the required dependencies by using user-defined build jobs.
To set this up, you need to define the version of Node.js to use and install the dependencies by using build.jobs.post_install:
version: 2
build:
os: "ubuntu-22.04"
tools:
python: "3.9"
nodejs: "16"
jobs:
post_install:
# Install dependencies defined in your ``package.json``
- npm ci
# Install any other extra dependencies to build the docs
- npm install -g jsdoc
Install dependencies with Poetry
Projects managed with Poetry can use the post_create_environment user-defined job to install Python dependencies with Poetry.
Take a look at the following example:
version: 2
build:
os: "ubuntu-22.04"
tools:
python: "3.10"
jobs:
post_create_environment:
# Install poetry
# https://python-poetry.org/docs/#installing-manually
- pip install poetry
post_install:
# Install dependencies with 'docs' dependency group
# https://python-poetry.org/docs/managing-dependencies/#dependency-groups
# VIRTUAL_ENV needs to be set manually for now.
# See https://github.com/readthedocs/readthedocs.org/pull/11152/
- VIRTUAL_ENV=$READTHEDOCS_VIRTUALENV_PATH poetry install --with docs
sphinx:
configuration: docs/conf.py
Update Conda version
Projects using Conda may need to install the latest available version of Conda.
This can be done by using the pre_create_environment
user-defined job to update Conda
before creating the environment.
Take a look at the following example:
version: 2
build:
os: "ubuntu-22.04"
tools:
python: "miniconda3-4.7"
jobs:
pre_create_environment:
- conda update --yes --quiet --name=base --channel=defaults conda
conda:
environment: environment.yml
Override the build process
Warning
This feature is in beta and could change without warning.
We are currently testing the new addons integrations we are building on projects using the build.commands configuration key.
If your project requires full control of the build process, and extending the build process is not enough, all the commands executed during builds can be overridden using build.commands.
As Read the Docs does not have control over the build process, you are responsible for running all the commands required to install requirements and build your project.
Where to put files
It is your responsibility to generate HTML and other formats of your documentation using build.commands.
The contents of the $READTHEDOCS_OUTPUT/<format>/
directory will be hosted as part of your documentation.
We store the base folder name _readthedocs/ in the environment variable $READTHEDOCS_OUTPUT and encourage you to use it to generate paths.
Supported formats are published if they exist in the following directories:
$READTHEDOCS_OUTPUT/html/ (required)
$READTHEDOCS_OUTPUT/htmlzip/
$READTHEDOCS_OUTPUT/pdf/
$READTHEDOCS_OUTPUT/epub/
Note
Remember to create the folders before adding content to them. You can ensure that the output folder exists by adding the following command:
mkdir -p $READTHEDOCS_OUTPUT/html/
Search support
Read the Docs will automatically index the content of all your HTML files, respecting the search option.
You can access the search from the Read the Docs dashboard, or by using the Server side search API.
Note
In order for Read the Docs to index your HTML files correctly, they should follow the conventions described at Server side search integration.
build.commands
examples
This section contains examples that showcase what is possible with build.commands. Note that you may need to modify and adapt these examples depending on your needs.
Pelican
Pelican is a well-known static site generator that’s commonly used for blogs and landing pages. If you are building your project with Pelican you could use a configuration file similar to the following:
version: 2
build:
os: "ubuntu-22.04"
tools:
python: "3.10"
commands:
- pip install pelican[markdown]
- pelican --settings docs/pelicanconf.py --output $READTHEDOCS_OUTPUT/html/ docs/
Docsify
Docsify generates documentation websites on the fly, without the need to build static HTML. These projects can be built using a configuration file like this:
version: 2
build:
os: "ubuntu-22.04"
tools:
nodejs: "16"
commands:
- mkdir --parents $READTHEDOCS_OUTPUT/html/
- cp --recursive docs/* $READTHEDOCS_OUTPUT/html/
Asciidoc
Asciidoctor is a fast processor for converting and generating documentation from AsciiDoc source. The Asciidoctor toolchain includes Asciidoctor.js which you can use with custom build commands. Here is an example configuration file:
version: 2
build:
os: "ubuntu-22.04"
tools:
nodejs: "20"
commands:
- npm install -g asciidoctor
- asciidoctor -D $READTHEDOCS_OUTPUT/html index.asciidoc
Git integration (GitHub, GitLab, Bitbucket)
Your Read the Docs account can be connected to your Git provider’s account. Connecting your account provides the following features:
- 🔑️ Easy login
Log in to Read the Docs with your GitHub, Bitbucket, or GitLab account.
- 🔁️ List your projects
Select a project to automatically import from all your Git repositories and organizations. See: Importing your documentation.
- ⚙️ Automatic configuration
Have your Git repository automatically configured with your Read the Docs webhook, which allows Read the Docs to build your docs on every change to your repository.
- 🚥️ Commit status
See your documentation build status as a commit status indicator on pull request builds.
Note
- Are you using GitHub Enterprise?
We offer customized enterprise plans for organizations. Please contact support@readthedocs.com.
- Other Git providers
We also generally support all Git providers through manual configuration.

All calls to the incoming webhook are logged. Each call can trigger builds and version synchronization.
Read the Docs incoming webhook
Accounts with GitHub, Bitbucket, and GitLab integration automatically have Read the Docs' incoming webhook configured on all Git repositories that are imported. Other setups can configure the webhook through manual configuration.
When an incoming webhook notification is received, we ensure that it matches an existing Read the Docs project. Once we have validated the webhook, we take an action based on the information inside of the webhook.
Possible webhook action outcomes are:
Builds the latest commit.
Synchronizes your versions based on the latest tag and branch data in Git.
Runs your automation rules.
Auto-cancels any currently running builds of the same version.
Other features enabled by Git integration
We have additional documentation around features provided by our Git integrations:
See also
- Pull request previews
Your Read the Docs project will automatically be configured to send back build notifications, which can be viewed as commit statuses and on pull requests.
- Single Sign-on with GitHub, Bitbucket, or GitLab
Git integration makes it possible for us to synchronize your Git repository’s access rights from your Git provider. That way, the same access rights are effective on Read the Docs and you don’t have to configure access in two places.
Pull request previews
Your project can be configured to build and host documentation for every new pull request. Previewing changes during review makes it easier to catch formatting and display issues before they go live.
Features
- Build on pull request events
We create and build a new version when a pull request is opened, and rebuild the version whenever a new commit is pushed.
- Build status report
Your project’s pull request build status will show as one of your pull request’s checks. This status will update as the build is running, and will show a success or failure status when the build completes.
GitHub build status reporting
- Warning banner
A warning banner is shown at the top of documentation pages to let readers know that this version isn’t the main version for the project.
Note
Warning banners are available only for Sphinx projects.
See also
- How to configure pull request builds
A guide to configuring pull request builds on Read the Docs.
Security
If pull request previews are enabled for your project,
anyone who can open a pull request on your repository will be able to trigger a build of your documentation.
For this reason, pull request previews are served from a different domain than your main documentation
(org.readthedocs.build
and com.readthedocs.build
).
Builds from pull requests only have access to environment variables that are marked as Public. If you have environment variables with private information, make sure they aren't marked as Public. See Environment variables and build process for more information.
On Read the Docs for Business you can set pull request previews to be private or public. If you didn't import your project manually and your repository is public, the privacy level of pull request previews will be set to Public. Public pull request previews are available to anyone with the link to the preview, while private previews are only available to users with access to the Read the Docs project.
Warning
If you set the privacy level of pull request previews to Private, make sure that only trusted users can open pull requests in your repository.
Setting pull request previews to private on a public repository can allow a malicious user to access read-only APIs using the user’s session that is reading the pull request preview. Similar to GHSA-pw32-ffxw-68rh.
Build failure notifications
Build notifications can alert you when your documentation builds fail so you can take immediate action. We offer the following methods for being notified:
- Email notifications:
Read the Docs allows you to configure build notifications via email. When builds fail, configured email addresses are notified.
- Build Status Webhooks:
Build notifications can happen via webhooks. This means that we are able to support a wide variety of services that receive notifications.
Slack and Discord are supported through ready-made templates.
Webhooks can be customized through your own template and a variety of variable substitutions.
Note
We don’t trigger email notifications or build status webhooks on builds from pull requests.
See also
- How to setup email notifications
Enable email notifications on failed builds, so you always know that your docs are deploying successfully.
- How to setup build status webhooks
Steps for setting up build notifications via webhooks, including examples for popular platforms like Slack and Discord.
Environment variable overview
Read the Docs allows you to define your own environment variables to be used in the build process. It also defines a set of default environment variables with information about your build. These are useful for different purposes:
Custom environment variables are useful for adding build secrets such as API tokens.
Default environment variables are useful for varying your build specifically for Read the Docs or specific types of builds on Read the Docs.
Custom environment variables are defined in the dashboard interface. Environment variables apply to a project's entire build process, with two important exceptions.
Aside from storing secrets, there are other patterns that take advantage of environment variables, like reusing the same monorepo configuration in multiple documentation projects. In cases where the environment variable isn’t a secret, like a build tool flag, you should also be aware of the alternatives to environment variables.
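As a sketch of the secrets pattern, assuming you have defined a custom environment variable named EXAMPLE_API_TOKEN in the dashboard (the variable name and the webhook URL are hypothetical), a build job could use it without committing the secret to the repository:
version: 2
build:
  os: "ubuntu-22.04"
  tools:
    python: "3.12"
  jobs:
    post_build:
      # EXAMPLE_API_TOKEN is defined in the project dashboard, not in this file
      - curl -H "Authorization: Token $EXAMPLE_API_TOKEN" -X POST https://example.com/notify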
See also
- How to use custom environment variables
A practical example of adding and accessing custom environment variables.
- Environment variable reference
Reference to all pre-defined environment variables for your build environments.
- Public API reference: Environment variables
Reference for managing custom environments via Read the Docs’ API.
Environment variables and build process
When a build process is started, pre-defined environment variables and custom environment variables are added at each step of the build process. The two sets of environment variables are merged together during the build process and are exposed to all of the executed commands, with pre-defined variables taking precedence over custom environment variables.
There are two noteworthy exceptions for custom environment variables:
- Build checkout step
Custom environment variables are not available during the checkout step of the build process
- Pull Request builds
Custom environment variables that are not marked as Public will not be available in pull request builds
Patterns of using environment variables
Aside from storing secrets, environment variables are also useful if you need to make either your .readthedocs.yaml or the commands called in the build process behave depending on pre-defined environment variables or your own custom environment variables.
Example: Multiple projects from the same Git repo
If you have the need to build multiple documentation websites from the same Git repository,
you can use an environment variable to configure the behavior of your build commands
or Sphinx conf.py
file.
An example of this is found in the documentation project that you are looking at now.
Using the Sphinx extension sphinx-multiproject,
the following configuration code decides whether to build the user or developer documentation.
This is defined by the PROJECT
environment variable:
from multiproject.utils import get_project
# (...)
multiproject_projects = {
"user": {
"use_config_file": False,
"config": {
"project": "Read the Docs user documentation",
},
},
"dev": {
"use_config_file": False,
"config": {
"project": "Read the Docs developer documentation",
},
},
}
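# get_project() picks one of the keys defined above,
# based on the PROJECT environment variable set for each Read the Docs project.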
docset = get_project(multiproject_projects)
Alternatives to environment variables
In some scenarios, it’s more feasible to define your build’s environment variables using the .readthedocs.yaml
configuration file.
Using the dashboard for administering environment variables may not be the right fit if you already know that you want to manage environment variables as code.
Consider the following scenario:
The environment variable is not a secret.
and
The environment variable is used just once for a custom command.
In this case, you can define the environment variable as code using Build process customization. The following example shows how a non-secret single-purpose environment variable can also be used.
version: 2
build:
os: "ubuntu-22.04"
tools:
python: "3.12"
jobs:
post_build:
- EXAMPLE_ENVIRONMENT_VARIABLE=foobar command --flag
Environment variable reference
All build processes have the following environment variables automatically defined and available for each build step:
- READTHEDOCS
Whether the build is running inside Read the Docs.
- Default:
True
- READTHEDOCS_LANGUAGE
The locale name, or the identifier for the locale, for the project being built. This value comes from the project’s configured language.
- Example:
en
- Example:
it
- Example:
de_AT
- Example:
es
- Example:
pt_BR
- READTHEDOCS_VERSION
The slug of the version being built, such as
latest
,stable
, or a branch name likefeature-1234
. For pull request builds, the value will be the pull request number.
- READTHEDOCS_VERSION_NAME
The verbose name of the version being built, such as
latest
,stable
, or a branch name likefeature/1234
.
- READTHEDOCS_VERSION_TYPE
The type of the version being built.
- Example:
branch
- Example:
tag
- Example:
external
(for pull request builds)- Example:
unknown
- READTHEDOCS_VIRTUALENV_PATH
Path for the virtualenv that was created for this build. Only exists for builds using Virtualenv and not Conda.
- Example:
/home/docs/checkouts/readthedocs.org/user_builds/project/envs/version
- READTHEDOCS_OUTPUT
Base path for well-known output directories. Files in these directories will automatically be found, uploaded and published.
You need to concatenate an output format to this variable. Currently valid formats are html, pdf, htmlzip and epub (e.g. $READTHEDOCS_OUTPUT/html/ or $READTHEDOCS_OUTPUT/pdf/). You also need to create the directory before moving outputs into the destination. You can create it with the following command: mkdir -p $READTHEDOCS_OUTPUT/html/. Note that only html supports multiple files; the other formats should have one and only one file to be uploaded.
See also
- Where to put files
Information about using custom commands to generate output that will automatically be published once your build succeeds.
- READTHEDOCS_CANONICAL_URL
Canonical base URL for the version that is built. If the project has configured a custom domain (e.g. docs.example.com) it will be used in the resulting canonical URL. Otherwise, your project's default subdomain will be used.
The path for the language and version is appended to the domain, so the final canonical base URLs can look like the following examples:
- Example:
https://docs.example.com/en/latest/
- Example:
https://docs.readthedocs.io/ja/stable/
- Example:
https://example--17.org.readthedocs.build/fr/17/
- READTHEDOCS_GIT_CLONE_URL
URL for the remote source repository, from which the documentation is cloned. It could be HTTPS, SSH or any other URL scheme supported by Git. This is the same URL defined in your project's dashboard.
- Example:
https://github.com/readthedocs/readthedocs.org
- Example:
git@github.com:readthedocs/readthedocs.org.git
- READTHEDOCS_GIT_IDENTIFIER
Contains the Git identifier that was checked out from the remote repository URL. Possible values are either a branch or tag name.
- Example:
v1.x
- Example:
bugfix/docs-typo
- Example:
feature/signup
- Example:
update-readme
- READTHEDOCS_GIT_COMMIT_HASH
Git commit hash identifier checked out from the repository URL.
- Example:
1f94e04b7f596c309b7efab4e7630ed78e85a1f1
See also
- Environment variable overview
General information about how environment variables are used in the build process.
- How to use custom environment variables
Learn how to define your own custom environment variables, in addition to the pre-defined ones.
Versions
Read the Docs supports multiple versions of your repository.
On initial import,
we will create a latest
version.
This will point at the default branch defined in your version control system
(by default, main
on Git and default
in Mercurial).
If your project has any tags or branches with a name following semantic versioning,
we also create a stable
version, tracking your most recent release.
If you want a custom stable
version,
create either a tag or branch in your project with that name.
When you have Continuous Documentation Deployment configured for your repository, we will automatically build each version when you push a commit.
How we envision versions working
In the normal case,
the latest
version will always point to the most up to date development code.
If you develop on a branch that is different than the default for your VCS,
you should set the Default Branch to that branch.
You should push a tag for each version of your project.
These tags should be numbered in a way that is consistent with semantic versioning.
This will map to your stable
branch by default.
Note
We parse your tag names against the rules given by PEP 440. This spec allows "normal" version numbers like 1.4.2 as well as pre-releases. An alpha version or a release candidate are examples of pre-releases, and they look like this: 2.0a1.
We only consider non pre-releases for the stable version of your documentation.
If you have documentation changes on a long-lived branch, you can build those too. This will allow you to see how the new docs will be built in this branch of the code. Generally you won’t have more than 1 active branch over a long period of time. The main exception here would be release branches, which are branches that are maintained over time for a specific release number.
Version states
States define the visibility of a version across the site. You can change the states of a version from the Versions tab of your project.
Active
Docs for this version are visible
Builds can be triggered for this version
Inactive
Docs for this version aren't visible
Builds can't be triggered for this version
When you deactivate a version, its docs are removed.
Privacy levels
Note
Privacy levels are only supported on Business hosting.
Public
It means that everything is available to be seen by everyone.
Private
Private versions are available only to people who have permissions to see them. They will not display on any list view, and will 404 when you link them to others. If you want to share your docs temporarily, see Sharing private documentation.
In addition, if you want other users to view the build page of your public versions, you'll need to set the privacy level of your project to public.
Logging out
When you log in to a documentation site, you will be logged in until you close your browser. To log out, click on the Log out link in your documentation's flyout menu. This is usually located in the bottom right or bottom left, depending on the theme design. This will log you out from the current domain, but not end any other session that you have active.

Version warning
A banner can be automatically displayed to notify viewers that there may be a more stable version of the documentation available. Specifically:
When the latest version is being shown, and there's also a stable version active and not hidden, then the banner will remind the viewer that some of the documented features may not yet be available, and suggest that the viewer switch to the stable version.
When a version is being shown that is not the stable version, and there's a stable version available, then the banner will suggest that the viewer switch to the stable version to see the newest documentation.
This feature is enabled by default on projects using the new beta addons.
The beta addons can be enabled by using the build.commands config key, or via the new beta dashboard (https://beta.readthedocs.org) by going to the admin section of your docs (Admin > Settings).
Note
An older version of this feature is currently only available to projects that have already enabled it. When the updated feature development is finished the toggle setting will be enabled for all projects.
Redirects on root URLs
When a user hits the root URL for your documentation, for example https://pip.readthedocs.io/, they will be redirected to the Default version. This defaults to latest, but could also point to your latest released version.
Subprojects
In this article, you can learn more about how several documentation projects can be combined and presented to the reader on the same website.
Read the Docs can be configured to make other projects available on the website of the main project as subprojects. This allows for documentation projects to share a search index and a namespace or custom domain, but still be maintained independently.
This is useful for:
Organizations that need all their projects visible in one documentation portal or landing page
Projects that document and release several packages or extensions
Organizations or projects that want to have a common search function for several sets of documentation
For a main project example-project, a subproject example-project-plugin can be made available as follows:
Main project: https://example-project.readthedocs.io/en/latest/
Subproject: https://example-project.readthedocs.io/projects/plugin/en/latest/
See also
- How to manage subprojects
Learn how to create and manage subprojects
- How to link to other documentation projects with Intersphinx
Learn how to use references between different Sphinx projects, for instance between subprojects
Using aliases
Adding an alias for the subproject allows you to override the URL that is used to access it, giving more control over how you want to structure your projects. You can choose an alias for the subproject when it is created.
You can set your subproject’s project name and slug however you want, but we suggest prefixing it with the name of the main project.
Typically, a subproject is created with a <mainproject>- prefix. For instance, if the main project is called example-project and the subproject is called plugin, then the subproject's Read the Docs project slug will be example-project-plugin. When adding the subproject, the alias is set to plugin and the project's URL becomes example-project.readthedocs.io/projects/plugin.
When you add a subproject, the subproject will no longer be directly available from its own domain. For instance, example-project-plugin.readthedocs.io/ will redirect to example-project.readthedocs.io/projects/plugin.
Custom domain on subprojects
Adding a custom domain to a subproject is not allowed, since your documentation will always be served from the domain of the parent project.
Separate release cycles
By using subprojects, you can present the documentation of several projects even though they have separate release cycles.
Your main project may have its own versions and releases, while all of its subprojects maintain their own individual versions and releases. We recommend that documentation follows the release cycle of whatever it is documenting, meaning that your subprojects should be free to follow their own release cycle.
This is solved by having an individual flyout menu active for the project that’s viewed. When the user navigates to a subproject, they are presented with a flyout menu matching the subproject’s versions and Offline formats (PDF, ePub, HTML).
Search
Search on the parent project will include results from its subprojects.
If you search on the v1
version of the parent project,
results from the v1
version of its subprojects will be included,
or from the default version for subprojects that don’t have a v1
version.
This is currently the only way to share search results between projects; we do not yet support sharing search results between sibling subprojects or arbitrary projects.
Localization and Internationalization
In this article, we explain high-level approaches to internationalizing and localizing your documentation.
By default, Read the Docs assumes that your documentation is or might become multilingual one day.
The initial default language is English and
therefore you often see the initial build of your documentation published at /en/latest/
,
where the /en
denotes that it’s in English.
By having the en
URL component present from the beginning,
you are ready for the eventuality that you would want a second language.
Read the Docs supports hosting your documentation in multiple languages. Read below for the various approaches that we support.
Projects with one language
If your documentation isn’t in English (the default), you should indicate which language you have written it in.
It is easy to set the Language of your project. On the project Admin page (or Import page), simply select your desired Language from the dropdown. This will tell Read the Docs that your project is in that language. The language will be represented in the URL for your project.
For example, a project that is in Spanish will have a default URL of /es/latest/ instead of /en/latest/.
Projects with multiple translations (Sphinx-only)
See also
- How to manage translations for Sphinx projects
Describes the whole process for a documentation with multiples languages in the same repository and how to keep the translations updated on time.
This situation is a bit more complicated.
To support this,
you will have one parent project and a number of projects marked as translations of that parent.
Let's use phpmyadmin as an example. The main phpmyadmin project is the parent for all translations. Then you must create a project for each translation, for example phpmyadmin-spanish. You will set the Language for phpmyadmin-spanish to Spanish.
In the parent project's Translations page, you will say that phpmyadmin-spanish is a translation of your project.
This has the result of serving:
phpmyadmin at http://phpmyadmin.readthedocs.io/en/latest/
phpmyadmin-spanish at http://phpmyadmin.readthedocs.io/es/latest/
It also gets included in the Read the Docs flyout menu:

Note
The default language of a custom domain is determined by the language of the parent project that the domain was configured on. See Custom domains for more information.
Note
You can include multiple translations in the same repository, with the same conf.py and .rst files, but each project must specify the language to build for those docs.
Note
You must commit the .po
files for Read the Docs to translate your documentation.
Translation workflows
When you work with translations, the workflow of your translators becomes a critical component.
Considerations include:
Are your translators able to use a git workflow? For instance, are they able to translate directly via GitHub?
Do you benefit from machine translation?
Do you need different roles, for instance do you need translators and editors?
What is your source language?
When are your translated versions published?
By using Sphinx and .po files, you will be able to automatically synchronize between your documentation source messages on your git platform and your translation platform.
There are many translation platforms that support this workflow.
Because Read the Docs builds your git repository, you can use any of these solutions. Any solution that synchronizes your translations with your git repository will ensure that your translations are automatically published with Read the Docs.
URL versioning schemes
The versioning scheme of your project defines the URL of your documentation, and if your project supports multiple versions or translations.
Read the Docs supports three different versioning schemes:
See also
- How to change the versioning scheme of your project
How to configure your project to use a specific versioning scheme.
- Versions
General explanation of how versioning works on Read the Docs.
Multiple versions with translations
This is the default versioning scheme. It's the recommended one if your project has multiple versions, and has or plans to support translations.
The URLs of your documentation will look like:
/en/latest/
/en/1.5/
/es/latest/install.html
/es/1.5/contributing.html
Multiple versions without translations
Use this versioning scheme if you want to have multiple versions of your documentation, but don’t want to have translations.
The URLs of your documentation will look like:
/latest/
/1.5/install.html
Warning
This means you can’t have translations for your documentation.
Single version without translations
Having a single version of a documentation project can be considered the better choice in cases where there should only ever exist one unambiguous copy of your project. For example:
A research project may wish to only expose readers to their latest list of publications and research data.
A SaaS application might only ever have one version live.
The URLs of your documentation will look like:
/
/install.html
Warning
This means you can’t have translations or multiple versions for your documentation.
Custom domains
By configuring a custom domain for your project, your project can serve documentation from a domain you control, for instance docs.example.com. This is great for maintaining a consistent brand for your product and its documentation.
Default subdomains
Without a custom domain configured, your project’s documentation is served from a Read the Docs domain using a unique subdomain for your project:
<project name>.readthedocs.io for Read the Docs Community.
<organization name>-<project name>.readthedocs-hosted.com for Read the Docs for Business. The addition of the organization name allows multiple organizations to have projects with the same name.
See also
- How to manage custom domains
How to create and manage custom domains for your project.
Features
- Automatic SSL
SSL certificates are automatically issued through Cloudflare for every custom domain. No extra set up is required beyond configuring your project’s custom domain.
- CDN caching
Response caching is provided through a CDN for all documentation projects, including projects using a custom domain. CDN caching improves page response time for your documentation’s users, and the CDN edge network provides low latency response times regardless of location.
- Multiple domains
Projects can be configured to be served from multiple domains, which always includes the project’s default subdomain. Only one domain can be configured as the canonical domain however, and any requests to non-canonical domains and subdomains will redirect to the canonical domain.
- Canonical domains
The canonical domain configures the primary domain the documentation will serve from, and also sets the domain search engines use for search results when hosting from multiple domains. Projects can only have one canonical domain, which is the project’s default subdomain if no other canonical domain is defined.
See also
- Canonical URLs
How canonical domains affect your project’s canonical URL, and why canonical URLs are important.
- Subprojects
How to share a custom domain between multiple projects.
Canonical URLs
A canonical URL allows you to specify the preferred version of a web page to prevent duplicated content. Here are some examples of when a canonical URL is used:
Search engines use your canonical URL to link users to the correct version and domain of your documentation.
Many popular chat clients and social media networks generate link previews, using your canonical URL as the final destination.
If canonical URLs aren't used, it's easy for outdated documentation to be the top search result for various pages in your documentation. Canonical URLs are not a perfect solution to this problem, but people finding outdated documentation is a big problem, and this is one of the ways search engines suggest solving it.
Tip
In most cases, Read the Docs will automatically generate a canonical URL for Sphinx projects. Most Sphinx users do not need to take further action.
See also
- How to enable canonical URLs
More information on how to enable canonical URLs in your project.
How Read the Docs generates canonical URLs
The canonical URL takes the following into account:
The default version of your project (usually “latest” or “stable”).
The canonical custom domain if you have one, otherwise the default subdomain will be used.
For example, if you have a project named example-docs with a custom domain https://docs.example.com, then your documentation will be served at https://example-docs.readthedocs.io and https://docs.example.com.
Without specifying a canonical URL, a search engine like Google will index both domains.
You'll want to use https://docs.example.com as your canonical domain. This means that when Google indexes a page like https://example-docs.readthedocs.io/en/latest/, it will know that it should really point at https://docs.example.com/en/latest/, thus avoiding duplicating the content.
Note
If you want your custom domain to be set as the canonical, you need to set Canonical: This domain is the primary one where the documentation is served from
in the Admin > Domains section of your project settings.
Implementation
A canonical URL is automatically specified in the HTML output with a <link>
element.
For instance, regardless of whether you are viewing this page on /en/latest or /en/stable, the following HTML header data will be present:
<link rel="canonical" href="https://docs.readthedocs.io/en/stable/canonical-urls.html" />
Content Delivery Network (CDN) and caching
A CDN is used for making documentation pages fast for your users. CDNs increase speed by caching documentation content in multiple data centers around the world, and then serving docs from the data center closest to the user.
We support CDNs on both of our sites:
- On Read the Docs Community,
we are able to provide a CDN to all the projects that we host. This service is graciously sponsored by Cloudflare.
- On Read the Docs for Business,
the CDN is included as part of all of our plans. We use Cloudflare for this as well.
CDN benefits
Having a CDN in front of your documentation has many benefits:
Improved reliability: Since docs are served from multiple places, one can go down and the docs are still accessible.
Improved performance: Data takes time to travel across space, so connecting to a server closer to the user makes documentation load faster.
Automatic cache refresh
We automatically refresh the cache on the CDN when the following actions happen:
Your project is saved.
Your domain is saved.
A new version of your documentation is built.
By refreshing the cache according to these rules, readers should never see outdated content. This makes the end-user experience seamless, and fast.
sitemap.xml
support
Sitemaps allow you to inform search engines about URLs that are available for crawling. This makes your content more discoverable, and improves your Search Engine Optimization (SEO).
How it works
The sitemap.xml
file is read by search engines in order to index your documentation.
It contains information such as:
When a URL was last updated.
How often that URL changes.
How important this URL is in relation to other URLs in the site.
What translations are available for a page.
Read the Docs automatically generates a sitemap.xml for your project. By default the sitemap includes:
Each version of your documentation and when it was last updated, sorted by version number.
This allows search engines to prioritize results based on the version number, sorted by semantic versioning.
Custom sitemap.xml
You can control the sitemap that is used via the robots.txt
file.
Our robots.txt support allows you to host a custom version of this file.
An example would look like:
User-agent: *
Allow: /
Sitemap: https://docs.example.com/en/stable/sitemap.xml
404 Not Found
pages
If you want your project to use a custom or branded 404 Not Found
page,
you can put a 404.html
or 404/index.html
at the top level of your project’s HTML output.
How it works
When our servers return a 404 Not Found
error,
we check if there is a 404.html
or 404/index.html
in the root of your project’s output.
The following locations are checked, in order:
/404.html
or404/index.html
in the current documentation version./404.html
or404/index.html
in the default documentation version.
Tool integration
Documentation tools will have different ways of generating a 404.html
or 404/index.html
file.
We have examples for some of the most popular tools below.
We recommend the sphinx-notfound-page extension,
which Read the Docs maintains.
It automatically creates a 404.html
page for your documentation,
matching the theme of your project.
See its documentation for how to install and customize it.
If you want to create a custom 404.html, Sphinx uses the html_extra_path option to add static files to the output. You need to create a 404.html file and put it under the path defined in html_extra_path.
If you are using the DirHTML builder, no further steps are required. Sphinx will automatically apply the <page-name>/index.html folder structure to your 404 page: 404/index.html. Read the Docs also detects 404 pages named this way.
MkDocs automatically generates a 404.html
which Read the Docs will use.
However, assets will not be loaded correctly unless you define the site_url configuration value as your site’s
canonical base URL.
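A minimal sketch of the relevant mkdocs.yml settings (the site name and URL are placeholders for your own project's canonical base URL):
# mkdocs.yml
site_name: Example Project
site_url: https://example-project.readthedocs.io/en/latest/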
robots.txt
support
The robots.txt file allows you to customize how your documentation is indexed in search engines. It's useful for:
Hiding various pages from search engines
Disabling certain web crawlers from accessing your documentation
Disallowing any indexing of your documentation
Read the Docs automatically generates one for you with a configuration that works for most projects.
By default, the automatically created robots.txt:
Hides versions which are set to Hidden from being indexed.
Allows indexing of all other versions.
Warning
robots.txt
files are respected by most search engines,
but they aren’t a guarantee that your pages will not be indexed.
Search engines may choose to ignore your robots.txt
file,
and index your docs anyway.
If you require private documentation, please see Sharing private documentation.
How it works
You can customize this file to add more rules to it.
The robots.txt
file will be served from the default version of your project.
This is because the robots.txt
file is served at the top-level of your domain,
so we must choose a version to find the file in.
The default version is the best place to look for it.
Tool integration
Documentation tools will have different ways of generating a robots.txt
file.
We have examples for some of the most popular tools below.
Sphinx uses the html_extra_path configuration value to add static files to its final HTML output.
You need to create a robots.txt file and put it under the path defined in html_extra_path.
MkDocs needs the robots.txt
to be at the directory defined by the docs_dir configuration value.
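For example, with the default layout you would place the file at docs/robots.txt; a minimal sketch of the related mkdocs.yml settings (the site name is a placeholder):
# mkdocs.yml
site_name: Example Project
docs_dir: docs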
Offline formats (PDF, ePub, HTML)
This page will provide an overview of a core Read the Docs feature: building docs in multiple formats.
Read the Docs supports the following formats by default:
PDF
ePub
Zipped HTML
This means that every commit that you push will automatically update your offline formats as well as your documentation website.
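If you manage your project with a configuration file, a minimal sketch of enabling these formats for a Sphinx project looks like this (assuming your Sphinx configuration lives at docs/conf.py):
version: 2
build:
  os: "ubuntu-22.04"
  tools:
    python: "3.12"
sphinx:
  configuration: docs/conf.py
formats:
  - pdf
  - epub
  - htmlzip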
Use cases
This functionality is great for anyone who needs documentation when they aren’t connected to the internet. Users who are about to get on a plane can grab a single file and have the entire documentation during their trip. Many academic and scientific projects benefit from these additional formats.
PDF versions are also helpful to automatically create printable versions of your documentation. The source of your documentation will be structured to support both online and offline formats. This means that a documentation project displayed as a website can be downloaded as a PDF, ready to be printed as a report or a book.
Offline formats also support having the entire documentation in a single file. Your entire documentation can now be delivered as an email attachment, uploaded to an eReader, or accessed and searched locally without online latency. This makes your documentation project easy to redistribute or archive.
Accessing offline formats
You can download offline formats in the Project dashboard > Downloads:

When you are browsing a documentation project, they can also be accessed directly from the Flyout menu.
Examples
If you want to see an example, you can download the Read the Docs documentation in the following formats:
Continue learning
Downloadable documentation formats are built by your documentation framework. They are then published by Read the Docs and included in your Flyout menu. Therefore, it’s your framework that decides exactly how each output is built and which formats are supported:
- Sphinx
All output formats are built mostly lossless from the documentation source, meaning that your documentation source (reStructuredText or Markdown/MyST) is built from scratch for each output format.
- MkDocs and Docsify + more
The common case for most documentation frameworks is that several alternative extensions exist supporting various output formats. Most of the extensions export the HTML outputs as another format (for instance PDF) through a conversion process.
Because Sphinx supports the generation of offline formats through an official process, we are also able to support it officially. Other alternatives can also work, provided that you identify which extension you want to use and configure the environment for it to run. Other formats aren’t natively supported by Read the Docs, but support is coming soon.
See also
Other pages in our documentation are relevant to this feature, and might be a useful next step.
How to enable offline formats - Guide to enabling and disabling this feature.
formats - Configuration file options for offline formats.
How to embed content from your documentation
Read the Docs allows you to embed content from any of the projects we host and specific allowed external domains (currently docs.python.org, docs.scipy.org, docs.sympy.org, and numpy.org). This allows reuse of content across sites, making sure the content is always up to date.
There are a number of use cases for embedding content, so we’ve built our integration in a way that enables users to build on top of it. This guide will show you some of our favorite integrations:
Contextualized tooltips on documentation pages
Tooltips on your own documentation are really useful to add more context to the current page the user is reading. You can embed any content that is available via reference in Sphinx, including:
Python object references
Full documentation pages
Sphinx references
Term definitions
We built a Sphinx extension called sphinx-hoverxref on top of our Embed API that you can install in your project with minimal configuration. Here is an example showing a tooltip when you hover the mouse over a reference:

Tooltip shown when hovering on a reference using sphinx-hoverxref.
You can find more information about this extension, how to install and configure it in the hoverxref documentation.
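As a minimal sketch, the extension is enabled in conf.py like any other Sphinx extension; hoverxref_auto_ref is one of its options, but refer to the hoverxref documentation for the full and authoritative list:

# docs/source/conf.py
extensions = [
    # ... your other extensions ...
    "hoverxref.extension",  # sphinx-hoverxref
]

# Show a tooltip on every :ref: role (one of several available options)
hoverxref_auto_ref = True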
Inline help on application website
This allows us to keep the official documentation as the single source of truth, while also having great inline help on our application website. For example, on the “Automation Rules” admin page we can embed the content of our Automation rules documentation page and be sure it will always be up to date.
Note
We recommend you point at tagged releases instead of latest. Tags don’t change over time, so you don’t have to worry about the content you are embedding disappearing.
The following example will fetch the section “Creating an automation rule” from the page automation-rules.html in our own docs and populate its content into the #help-container div element.
<script type="text/javascript">
  var params = {
    // Page (and URL-encoded #section anchor) to embed
    'url': 'https://docs.readthedocs.io/en/latest/automation-rules.html%23creating-an-automation-rule',
    // 'doctool': 'sphinx',
    // 'doctoolversion': '4.2.0',
  };
  var url = 'https://readthedocs.org/api/v3/embed/?' + $.param(params);
  $.get(url, function (data) {
    // Insert the returned HTML fragment into the placeholder element
    $('#help-container').html(data['content']);
  });
</script>
<div id="help-container"></div>
You can modify this example to subscribe to the onclick JavaScript event and show a modal when the user clicks on a “Help” link.
Tip
Take into account that if the title changes, your section argument will break.
To avoid that, you can manually define Sphinx references above the sections you don’t want to break.
For example, in your .rst document file:

.. _unbreakable-section-reference:

Creating an automation rule
---------------------------

This is the text of the section.

And in your .md document file:

(unbreakable-section-reference)=
## Creating an automation rule

This is the text of the section.
To link to the section “Creating an automation rule” you can send section=unbreakable-section-reference. If you change the title it won't break the embedded content, because the label for that title will still be unbreakable-section-reference.
Please take a look at the Sphinx :ref: role documentation for more information about how to create references.
Calling the Embed API directly
The Embed API lives under the https://readthedocs.org/api/v3/embed/ URL and accepts the URL of the content you want to embed. Take a look at its own documentation to find out more details.
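For example, the request made by the jQuery snippet above could also be issued from Python. This is only a sketch: the endpoint, the url parameter, and the content field mirror the earlier example, while the requests library is an assumption of this illustration:

import requests

# Ask the Embed API for a page (an optional #section anchor can be appended to the URL)
params = {
    "url": "https://docs.readthedocs.io/en/latest/automation-rules.html",
}
response = requests.get("https://readthedocs.org/api/v3/embed/", params=params)
response.raise_for_status()

# The JSON response carries the embeddable HTML fragment under "content"
print(response.json()["content"])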
You can click on the following links and check a live response directly in the browser as examples:
Note
All relative links to pages contained in the remote content will continue to point at the remote page.
Server side search
Read the Docs provides full-text search across all of the pages of all projects, powered by Elasticsearch. You can search all projects at https://readthedocs.org/search/, or search only your project from the Search tab of your project.
See also
- Search query syntax
Syntax options for searching Read the Docs projects
- Server side search API
Reference to the Server Side Search API
Search features
Read the Docs has the following search features:
- Search across subprojects
Subprojects allow you to host multiple discrete projects on a single domain. Every subproject hosted on that same domain is included in the search results of the main project.
- Search results land on the exact content you were looking for
We index every heading in the document, allowing you to get search results exactly to the content that you are searching for. Try this out by searching for “full-text search”.
- Full control over which results should be listed first
Set a custom rank per page, allowing you to deprecate content, and always show relevant content to your users first. See search.ranking.
- Search across projects you have access to
Search across all the projects you have access to from your Dashboard. Don't remember where you found that document the other day? No problem, you can search across them all.
You can also specify what projects you want to search using the project:{name} syntax, for example: “project:docs project:dev search”. See Search query syntax.
- Special query syntax for more specific results
We support a full range of search queries. You can see some examples at Special queries.
- Configurable
Tweak search results according to your needs using a configuration file.
- Ready to use
We override the default search engine of your Sphinx project with ours to provide you with all these benefits within your project. We fall back to the built-in search engine from your project if ours doesn't return any results, just in case we missed something 😄.
- API
Integrate our search as you like. See Server side search API.
- Analytics
Know what your users are searching for. See How to use search analytics.

Search analytics demo. Read more in How to use search analytics.
Search as you type
readthedocs-sphinx-search is a Sphinx extension that integrates your documentation more closely with the search implementation of Read the Docs. It adds a clean and minimal full-page search UI that supports a search as you type feature.
To try this feature, you can press / (forward slash) and start typing or just visit these URLs:
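As a sketch, the extension is enabled in conf.py like any other Sphinx extension; check its documentation for the exact module name and any further options:

# docs/source/conf.py
extensions = [
    # ... your other extensions ...
    "sphinx_search.extension",  # readthedocs-sphinx-search: search-as-you-type UI
]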
Search query syntax
When searching on Read the Docs with server side search, you can use some parameters in your query in order to search on given projects, versions, or to get more accurate results.
Parameters
Parameters are in the form of name:value. They can appear anywhere in the query, and depending on the parameter, you can use one or more of them. Any other text that isn't a parameter will be part of the search query. If you don't want your search term to be interpreted as a parameter, you can escape it like project\:docs.
Note
Unknown parameters like foo:bar don't require escaping.
The available parameters are:
- project
Indicates the project and version to include results from (this doesn't include subprojects or translations). If the version isn't provided, the default version will be used. More than one parameter can be included.
Examples:
project:docs test
project:docs/latest test
project:docs/stable project:dev test
- subprojects
Include results from the given project and its subprojects. If the version isn't provided, the default version of all projects will be used. If a version is provided, all subprojects matching that version will be included, and if they don't have a version with that name, we use their default version. More than one parameter can be included.
Examples:
subprojects:docs test
subprojects:docs/latest test
subprojects:docs/stable subprojects:dev test
- user
Include results from projects the given user has access to. The only supported value is @me, which is an alias for the current user. Only one parameter can be included; if duplicated, the last one will override the previous one.
Examples:
user:@me test
Permissions
If the user doesn’t have permission over one version, or if the version doesn’t exist, we don’t include results from that version.
The API will return all the projects that were used in the final search, so you can check which projects were included.
Limitations
In order to keep our search usable for everyone, you can search up to 100 projects at a time. If the resulting query includes more than 100 projects, they will be omitted from the final search.
This syntax is only available when using our search API V3 or when using the global search (https://readthedocs.org/search/).
Searching multiple versions of the same project isn't supported; the last version specified will override the previous one.
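To illustrate how the pieces of the syntax combine, here is a small sketch that URL-encodes a query mixing parameters, an escaped term, and an exact phrase for the global search page. The q query-string parameter name is an assumption of this example:

from urllib.parse import urlencode

# project: parameters, an escaped term, and an exact phrase in one query
query = r'project:docs/stable project:dev project\:docs "server side search"'

# Assumed query-string parameter name: q
print("https://readthedocs.org/search/?" + urlencode({"q": query}))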
Special queries
Read the Docs uses the Simple Query String feature from Elasticsearch. This means that as the search query becomes more complex, the results yielded become more specific.
Exact phrase search
If a query is wrapped in " (double quotes), then only those results where the phrase is exactly matched will be returned.
Examples:
"custom css"
"adding a subproject"
"when a 404 is returned"
Prefix query
* (asterisk) at the end of any term signifies a prefix query. It returns results containing words with the given prefix.
Examples:
test*
build*
Fuzziness
~N (tilde followed by a number) after a word indicates edit distance (fuzziness).
This type of query is helpful when the exact spelling of the keyword is unknown.
It returns results that contain terms similar to the search term.
Examples:
doks~1
test~2
getter~2
Words close to each other
~N (tilde followed by a number) after a phrase can be used to match words that are near to each other.
Examples:
"dashboard admin"~2
"single documentation"~1
"read the docs policy"~5
Redirects
Over time, a documentation project may want to rename and move contents around. Redirects allow changes in a documentation project to happen without bad user experiences.
If you do not manage URL structures, users will eventually encounter 404 File Not Found errors. While this may be acceptable in some cases, the bad user experience of a 404 page is usually best to avoid.
- Built-in redirects ⬇️
Allows for simple and long-term sharing of external references to your documentation.
- User-defined redirects ⬇️
Makes it easier to move contents around
See also
- How to use custom URL redirects in documentation projects
This guide shows you how to add redirects with practical examples.
- Best practices for linking to your documentation
Information and tips about creating and handling external references.
- How to deprecate content
A guide to deprecating features and other topics in a documentation.
Built-in redirects
This section explains the redirects that are automatically active for all projects and how they are useful. Built-in redirects are especially useful for creating and sharing incoming links, which is discussed in depth in Best practices for linking to your documentation.
Page redirects at /page/
You can link to a specific page and have it redirect to your default version,
allowing you to create links on external sources that are always up to date.
This is done with the /page/ URL prefix.
For instance, you can reach the page you are reading now by going to https://docs.readthedocs.io/page/guides/best-practice/links.html.
Another way to handle this is the latest version. You can set your latest version to a specific version and just always link to latest.
You can reach this page by going to https://docs.readthedocs.io/en/latest/guides/best-practice/links.html.
Root URL redirect at /
A link to the root of your documentation (<slug>.readthedocs.io/) will redirect to the default version, as set in your project settings. This works for readthedocs.io (Read the Docs Community), readthedocs-hosted.com (Read the Docs for Business), and custom domains.
For example:
docs.readthedocs.io -> docs.readthedocs.io/en/stable/
Warning
You cannot use the root redirect to reference specific pages. / only redirects to the default version, whereas /some/page.html will not redirect to /en/latest/some/page.html. Instead, use Page redirects at /page/.
You can choose which is the default version for Read the Docs to display. This usually corresponds to the most recent official release from your project.
Root language redirect at /<lang>/
A link to the root language of your documentation (<slug>.readthedocs.io/en/) will redirect to the default version of that language. For example, accessing the English language of the project will redirect you to its default version (stable):
https://docs.readthedocs.io/en/ -> https://docs.readthedocs.io/en/stable/
Shortlink with https://<slug>.rtfd.io
Links to rtfd.io are treated the same way as readthedocs.io.
They are intended to be easy and short for people to type.
You can reach these docs at https://docs.rtfd.io.
User-defined redirects
Page redirects
Page Redirects let you redirect a page across all versions of your documentation.
Note
Since page redirects apply to all versions, From URL doesn't need to include the /<language>/<version> prefix (e.g. /en/latest), but only the part of the URL that follows it. If you want to set redirects only for some languages or some versions, you should use Exact redirects with the fully-specified path.
Exact redirects
Exact Redirects take into account the full URL (including language and version), allowing you to create a redirect for a specific version or language of your documentation.
Clean/HTML URLs redirects
If you decide to change the style of the URLs of your documentation, you can use Clean URL to HTML or HTML to clean URL redirects to redirect users to the new URL style.
For example, if a previous page was at /en/latest/install.html
,
and now is served at /en/latest/install/
, or vice versa,
users will be redirected to the new URL.
Limitations and observations
Read the Docs Community users are limited to 100 redirects per project, and Read the Docs for Business users have a number of redirects limited by their plan.
By default, redirects only apply on pages that don’t exist. Forced redirects allow you to apply redirects on existing pages.
Redirects aren’t applied on previews of pull requests. You should treat these domains as ephemeral and not rely on them for user-facing content.
You can redirect to URLs outside Read the Docs, just include the protocol in To URL, e.g. https://example.com.
A wildcard can be used at the end of From URL (suffix wildcard) to redirect all pages matching a prefix. Prefix and infix wildcards are not supported.
If a wildcard is used in From URL, the part of the URL that matches the wildcard can be used in To URL with the :splat placeholder.
Redirects without a wildcard match paths with or without a trailing slash, e.g. /install matches /install and /install/.
The order of redirects matters. If multiple redirects match the same URL, the first one will be applied. The order of redirects can be changed from your project's dashboard.
If an infinite redirect is detected, a 404 error will be returned, and no other redirects will be applied.
Examples
Redirecting a page
Say you move the example.html page into a subdirectory of examples: examples/intro.html.
You can create a redirect with the following configuration:
Type: Page Redirect
From URL: /example.html
To URL: /examples/intro.html
Users will now be redirected:
From https://docs.example.com/en/latest/example.html to https://docs.example.com/en/latest/examples/intro.html.
From https://docs.example.com/en/stable/example.html to https://docs.example.com/en/stable/examples/intro.html.
If you want this redirect to apply to a specific version of your documentation, you can create a redirect with the following configuration:
Type: Exact Redirect
From URL: /en/latest/example.html
To URL: /en/latest/examples/intro.html
Note
Use the desired version and language instead of latest and en.
Redirecting a directory
Say you rename the /api/ directory to /api/v1/.
Instead of creating a redirect for each page in the directory,
you can use a wildcard to redirect all pages in that directory:
Type: Page Redirect
From URL: /api/*
To URL: /api/v1/:splat
Users will now be redirected:
From https://docs.example.com/en/latest/api/ to https://docs.example.com/en/latest/api/v1/.
From https://docs.example.com/en/latest/api/projects.html to https://docs.example.com/en/latest/api/v1/projects.html.
If you want this redirect to apply to a specific version of your documentation, you can create a redirect with the following configuration:
Type: Exact Redirect
From URL: /en/latest/api/*
To URL: /en/latest/api/v1/:splat
Note
Use the desired version and language instead of latest and en.
Redirecting a directory to a single page
Say you put the contents of the /examples/ directory into a single page at /examples.html.
You can use a wildcard to redirect all pages in that directory to the new page:
Type: Page Redirect
From URL: /examples/*
To URL: /examples.html
Users will now be redirected:
From https://docs.example.com/en/latest/examples/ to https://docs.example.com/en/latest/examples.html.
From https://docs.example.com/en/latest/examples/intro.html to https://docs.example.com/en/latest/examples.html.
If you want this redirect to apply to a specific version of your documentation, you can create a redirect with the following configuration:
Type: Exact Redirect
From URL: /en/latest/examples/*
To URL: /en/latest/examples.html
Note
Use the desired version and language instead of latest and en.
Redirecting a page to the latest version
Say you want your users to always be redirected to the latest version of a page, for example your security policy (/security.html).
You can use a wildcard with a forced redirect to redirect all versions of that page to the latest version:
Type: Page Redirect
From URL: /security.html
To URL: https://docs.example.com/en/latest/security.html
Force Redirect: True
Users will now be redirected:
From https://docs.example.com/en/v1.0/security.html to https://docs.example.com/en/latest/security.html.
From https://docs.example.com/en/v2.5/security.html to https://docs.example.com/en/latest/security.html.
Note
To URL includes the domain. This is required; otherwise the redirect will be relative to the current version, resulting in a redirect to https://docs.example.com/en/v1.0/en/latest/security.html.
Redirecting an old version to a new one
Let's say that you want to redirect readers of the deprecated 2.0 version of your documentation under /en/2.0/ to the newest 3.0 version at /en/3.0/.
You can use an exact redirect to do so:
Type: Exact Redirect
From URL: /en/2.0/*
To URL: /en/3.0/:splat
Users will now be redirected:
From https://docs.example.com/en/2.0/dev/install.html to https://docs.example.com/en/3.0/dev/install.html.
Note
For this redirect to work, your old version must be disabled. If the version is still active, you can use the Force Redirect option.
Creating a shortlink
Say you want to redirect https://docs.example.com/security to https://docs.example.com/en/latest/security.html, so it's easier to share the link to the page.
You can create a redirect with the following configuration:
Type: Exact Redirect
From URL: /security
To URL: /en/latest/security.html
Users will now be redirected:
From https://docs.example.com/security (no trailing slash) to https://docs.example.com/en/latest/security.html.
From https://docs.example.com/security/ (trailing slash) to https://docs.example.com/en/latest/security.html.
Migrating your docs to Read the Docs
Say that you previously had your docs hosted at https://docs.example.com/dev/, and chose to migrate to Read the Docs with support for multiple versions and translations. Your documentation will now be served at https://docs.example.com/en/latest/, but your users may have bookmarks saved with the old URL structure, for example https://docs.example.com/dev/install.html.
You can use an exact redirect with a wildcard to redirect all pages from the old URL structure to the new one:
Type: Exact Redirect
From URL: /dev/*
To URL: /en/latest/:splat
Users will now be redirected:
From https://docs.example.com/dev/install.html to https://docs.example.com/en/latest/install.html.
Migrating your documentation to another domain
You can use an exact redirect with the force option to migrate your documentation to another domain, for example:
Type: Exact Redirect
From URL: /*
To URL: https://newdocs.example.com/:splat
Force Redirect: True
Users will now be redirected:
From https://docs.example.com/en/latest/install.html to https://newdocs.example.com/en/latest/install.html.
Changing your Sphinx builder from html to dirhtml
When you change your Sphinx builder from html to dirhtml, all your URLs will change from /page.html to /page/. You can create a redirect of type HTML to clean URL to redirect all your old URLs to the new style.
Analytics for search and traffic
Read the Docs supports analytics for search and traffic. When someone reads your documentation, we collect data about the visit and the referrer with full respect for the privacy of the visitor.
Traffic analytics
Read the Docs aggregates statistics about visits to your documentation.
This is mainly information about how often pages are viewed,
and which return a 404 Not Found
error code.
Traffic Analytics lets you see which documents your users are reading. This allows you to understand how your documentation is being used, so you can focus on expanding and updating parts people are reading most.
If you require more detailed analytics, Read the Docs has native support for Google Analytics. It’s also possible to customize your documentation to include other analytics frameworks.
Learn more in How to use traffic analytics.
Search analytics
When someone visits your documentation and uses the built-in server side search feature, Read the Docs will collect analytics on their search term.
Those are aggregated into a simple view of the “Top queries in the past 30 days”. You can also download this data.
This is helpful to optimize your documentation in alignment with your readers’ interests. You can discover new trends and expand your documentation to new needs.
Learn more in How to use search analytics.
Security logs
Security logs allow you to audit what has happened recently in your organization or account. This feature is quite important for many security compliance programs, as well as the general peace of mind of knowing what is happening on your account. We store the IP address and the browser used on each event, so that you can confirm this access was from the intended person.
Security logs are only visible to organization owners. You can invite other team members as owners.
See also
- Security policy
General information and reference about how security is handled on Read the Docs.
User security log
We store a user security log for the latest 90 days of activity. This log is useful to validate that no unauthorized events have occurred.
The security log tracks the following events:
Authentication on the dashboard.
Authentication on documentation pages (Business hosting only).
When invitations to manage a project are sent, accepted, revoked or declined.
Authentication failures and successes are both tracked.
Logs are available in .
Organization security log
Note
This feature is only available on Read the Docs for Business.
The length of log storage varies with your plan, check our pricing page for more details. Your organization security log is a great place to check periodically to ensure there hasn’t been unauthorized access to your organization.
Organization logs track the following events:
Authentication on documentation pages from your organization.
User accesses a documentation page from your organization (Enterprise plans only).
User accesses a documentation’s downloadable formats (Enterprise plans only).
Invitations to organization teams are sent, revoked or accepted.
Authentication failures and successes are both tracked.
Logs are available in .
If you have any additional information that you wish the security log captured, you can always reach out to Site support.
See also
- Security reports
Security information related to our own platform, personal data treatment, and how to report a security issue.
Status badges
Status badges let you show the state of your documentation to your users. They will show if the latest build has passed, failed, or is in an unknown state. They are great for embedding in your README, or putting inside your actual doc pages.
You can see a badge in action in the Read the Docs README.
Display states
Badges have the following states which can be shown to users:
Green: passing - the last build was successful.
Red: failing - the last build failed.
Yellow: unknown - we couldn't figure out the status of your last build.
An example of each is shown here:
Automatically generated
On the dashboard of a project, an example badge is displayed together with code snippets for reStructuredText, Markdown, and HTML.
Badges are generated on-demand for all Read the Docs projects, using the following URL syntax:
https://readthedocs.org/projects/<project-slug>/badge/?version=<version>&style=<style>
Style
You can pass the style GET argument to get custom styled badges. This allows you to match the look and feel of your website. By default, the flat style is used.
The available styles are: flat (default), flat-square, for-the-badge, plastic, and social.
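For example, combining the version and style arguments:
https://readthedocs.org/projects/<project-slug>/badge/?version=latest&style=flat-square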
Version-specific badges
You can change the version of the documentation your badge points to.
To do this, you can pass the version GET argument to the badge URL.
By default, it will point at the default version you have specified for your project.
The badge URL looks like this:
https://readthedocs.org/projects/<project-slug>/badge/?version=latest
Badges on dashboard pages
On each project home page there is a badge that communicates the status of the default version. If you click on the badge icon, you will be given snippets for reStructuredText, Markdown, and HTML to make embedding it easier.
Badges for private projects
Note
This feature is only available on Read the Docs for Business.
For private projects, a badge URL cannot be guessed. A token is needed to display it. Private badge URLs are displayed on a private project’s dashboard in place of public URLs.
How to structure your documentation
A documentation project's ultimate goal is to be read and understood by a reader. Readers need to be able to discover the information that they need. Without a defined structure, readers either won't find the information they need or will get frustrated along the way.
One of the largest benefits of a good structure is that it removes questions that keep authors from writing documentation. Starting with a blank page is often the hardest part of documentation, so anything we can do to remove this problem is a win.
Choosing a structure
Good news! You don’t have to invent all of the structure yourself, since a lot of experience-based work has been done to come up with a universal documentation structure.
In order to avoid starting with a blank page, we recommend a simple process:
Choose a structure for your documentation. We recommend Diátaxis for this.
Find an example project or template to start from.
Start writing by filling in the structure.
This process helps you get started quickly, and helps keep things consistent for the reader of your documentation.
Diátaxis Methodology
The documentation you’re reading is written using the Diátaxis framework. It has four major parts as summarized by this image:

We recommend that you read more about it in the official Diátaxis documentation.
Explaining the structure to your users
One of the benefits of Diátaxis is that it’s a well-known structure, and users might already be familiar with it. As long as you stick to the structure, your users should be able to use existing experience to navigate.
Using the names that are defined (e.g. Tutorials, Explanation) in a user-facing way also helps here.
Best practices for linking to your documentation
Once you start to publish documentation, external sources will inevitably link to specific pages in your documentation.
Sources of incoming links vary greatly depending on the type of documentation project that is published. They can include everything from old emails to GitHub issues, wiki articles, software comments, PDF publications, or StackOverflow answers. Most of these incoming sources are not in your control.
Read the Docs makes it easier to create and manage incoming links by redirecting certain URLs automatically and giving you access to define your own redirects.
In this article, we explain how our built-in redirects work and what we consider “best practice” for managing incoming links.
See also
- Versions
Read more about how to handle versioned documentation and URL structures.
- Redirects
Overview of all the redirect features available on Read the Docs. Many of the redirect features are useful either for building external links or handling requests to old URLs.
- How to use custom URL redirects in documentation projects
How to add a user-defined redirect, step-by-step. Useful if your content changes location!
Best practice: “permalink” your pages
You might be familiar with the concept of permalinks from blogging. The idea is that a blog post receives a unique link as soon as it’s published and that the link does not change afterward. Incoming sources can reference the blog post even though the blog changes structure or the post title changes.
When creating an external link to a specific documentation page, chances are that the page is moved as the documentation changes over time.
How should a permalink look for a documentation project? Firstly, you should know that a permalink does not really exist in documentation but it is the result of careful actions to avoid breaking incoming links.
As a documentation owner, you most likely want users clicking on incoming links to see the latest version of the page.
Good practice ✅
Use page redirects if you are linking to the page in the default version of the default language. This allows links to continue working even if those defaults change.
If you move a page that likely has incoming references, create a custom redirect rule.
Links to other Sphinx projects should use intersphinx.
Use minimal filenames that don’t require renaming often.
When possible, keep original file names rather than going for low-impact URL renaming. Renaming an article’s title is great for the reader and great for SEO, but this does not have to involve the URL.
Establish your understanding of the latest and default version of your documentation at the beginning. Changing their meaning is very disruptive to incoming links.
Keep development versions hidden so people do not find them on search engines by mistake. This is the best way to ensure that nobody links to URLs that are intended for development purposes.
Use a version warning to ensure the reader is aware in case they are reading an old (archived) version.
Tip
- Using Sphinx?
If you are using :ref: for cross-referencing, you may add as many reference labels to a headline as you need, keeping old reference labels. This will make refactoring documentation easier and avoid breaking external projects that reference your documentation through intersphinx.
Questionable practice 🟡
Avoid using specific versions in links unless users need that exact version. Versions get outdated.
Avoid using a public latest for development versions and do not make your default version a development branch. Publishing development branches can mean that users are reading instructions for unreleased software or draft documentation.
Tip
- 404 pages are also okay!
If documentation pages have been removed or moved, it can make the maintainer of the referring website aware that they need to update their link. Users will be aware that the documentation project still exists but has changed.
The default Read the Docs 404 page is designed to be helpful, but you can also design your own, see 404 Not Found pages.
Security considerations for documentation pages
This article explains the security implications of documentation pages. This doesn't apply to the main dashboard (readthedocs.org/readthedocs.com), only to documentation pages (readthedocs.io, readthedocs-hosted.com, and custom domains).
See also
- Cross-site requests
Learn about cross-origin requests in our public APIs.
Cross-origin requests
Read the Docs allows cross-origin requests for documentation resources it serves.
However, internal and proxied APIs, typically found under the /_/ path, don't allow cross-origin requests.
To facilitate this, the following headers are added to all responses from documentation pages:
Access-Control-Allow-Origin: *
Access-Control-Allow-Methods: GET, HEAD, OPTIONS
These headers allow cross-origin requests from any origin and only allow the GET, HEAD and OPTIONS methods. It’s important to note that passing credentials (such as cookies or HTTP authentication) in cross-origin requests is not allowed, ensuring access to public resources only.
Having cross-origin requests enabled allows third-party websites to make use of files from your documentation (as long as they are public), which allows various third-party integrations to work.
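As a quick sketch, you can verify these headers yourself on any public documentation page; docs.readthedocs.io and the requests library are used here purely for illustration:

import requests

# Any public Read the Docs-hosted documentation page should return the CORS headers
response = requests.get("https://docs.readthedocs.io/en/stable/")
print(response.headers.get("Access-Control-Allow-Origin"))   # expected: *
print(response.headers.get("Access-Control-Allow-Methods"))  # expected: GET, HEAD, OPTIONS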
If needed, the Access-Control headers can be changed for your documentation pages by contacting support.
You are responsible for providing the correct values for these headers, and making sure they don’t break your documentation pages.
Embedding documentation pages
Embedding documentation pages in an iframe is allowed.
Read the Docs doesn't set the X-Frame-Options or Content-Security-Policy headers, which means that the browser's default behavior is used.
Embedding private documentation pages in an iframe is possible, but it requires users to be previously authenticated in the embedded domain.
It's important to note that embedding documentation pages in an iframe does not grant the parent page access to the iframe's content. Documentation pages serve static content only, and the exposed APIs are read-only, making the exploitation of a clickjacking vulnerability very unlikely.
If needed, the X-Frame-Options and Content-Security-Policy headers can be set on your documentation pages by contacting support.
You are responsible for providing the correct values for these headers, and making sure they don’t break your documentation pages.
Business hosting
We offer Read the Docs for Business for building and hosting commercial documentation.
Our commercial solutions are provided as a set of subscriptions that are paid and managed through an online interface. In order to get started quickly and easily, a trial option is provided free of charge.
See also
- Read the Docs website: Features
A high-level overview of platform features is available on our website, and the pricing page has a feature breakdown by subscription level.
- Read the Docs website: Company
Information about the company running Read the Docs, including our mission, team, and community.
Commercial documentation solutions
In addition to providing the same features as Read the Docs Community, commercial subscriptions to Read the Docs add additional features and run on separate infrastructure.

Read the Docs for Community and Business are a combined system: all features developed for the community benefit the business solution, and most features developed for business users are also implemented for the community.
The following list is a high-level overview of the areas covered by Read the Docs for Business. If you want a full feature breakdown, please refer to our pricing page.
- Private repositories and private documentation
The largest difference between the community solution and our commercial offering is the ability to connect to private repositories, to restrict documentation access to certain users, or to share private documentation via private hyperlinks.
- Additional build resources
Do you have a complicated build process that uses large amounts of CPU, memory, disk, or networking resources? Commercial subscriptions offer more resources which results in faster documentation build times. We can also customize builders, such as with a GPU or multiple CPUs.
- Priority support
We have a dedicated support team that responds to support requests during business hours.
- Advertising-free
All commercially hosted documentation is always ad-free.
- Business features
Enjoy additional functionality specifically for larger organizations such as team management, single-sign on, and audit logging.
Organizations
Note
This feature is only available on Read the Docs for Business.
In this article, we explain how the organizations feature on Read the Docs allows you to manage access to your projects. On Read the Docs for Business, your account is linked to an organization. Organizations allow you to define both individual and team permissions for your projects.
In this article, we use ACME Corporation as our example organization. ACME has a few people inside their organization, some who need full access and some who just need access to one project.
See also
- How to manage Read the Docs teams
A step-by-step guide to managing teams.
Member types
Owners – Get full access to both view and edit the Organization and all Projects
Members – Get access to a subset of the Organization projects
Teams – Where you give members access to a set of projects.
The best way to think about this relationship is:
Owners will create Teams to assign permissions to all Members.
Warning
Owners, Members and Teams behave differently if you are using Single Sign-on with GitHub, Bitbucket, or GitLab.
Team types
You can create two types of Teams:
Admins – These teams have full access to administer the projects in the team. They are allowed to change all of the settings, set notifications, and perform any action under the Admin tab.
Read Only – These teams are only able to read and search inside the documents.
Example
ACME would set up Owners of their organization, for example Frank Roadrunner would be an owner. He has full access to the organization and all projects.
Wile E. Coyote is a contractor, and will just have access to the new project Road Builder.
Roadrunner would set up a Team called Contractors. That team would have Read Only access to the Road Builder project. Then he would add Wile E. Coyote to the team. This would give him access to just this one project inside the organization.
Single Sign-On (SSO)
Note
This feature is only available on Read the Docs for Business.
Single sign-on is an optional feature on Read the Docs for Business for all users. By default, you will use teams within Read the Docs to manage user authorization. SSO will allow you to grant permissions to your organization’s projects in an easy way.
Currently, we support two different types of single sign-on:
Authentication and authorization are managed by the identity provider (GitHub, Bitbucket or GitLab)
Authentication (only) is managed by the identity provider (Google Workspace account with a verified email address)
Users can log out by using the Log Out link in the RTD flyout menu.
Single Sign-on with GitHub, Bitbucket, or GitLab
Using an identity provider that supports authentication and authorization allows organization owners to manage who has access to projects on Read the Docs, directly from the provider itself. If a user needs access to your documentation project on Read the Docs, that user just needs to be granted permissions in the Git repository associated with the project.
Once you enable this option, your existing Read the Docs teams will not be used. All authentication will be done using your git provider, including any two-factor authentication and additional Single Sign-on that they support.
Learn how to configure this SSO method with our How to setup Single Sign-On (SSO) with GitHub, GitLab, or Bitbucket.
SSO with Google Workspace
This feature allows you to easily manage access for users with a specific email address (e.g. employee@company.com), where company.com is a registered Google Workspace domain. As this identity provider does not provide information about which projects a user has access to, permissions are managed by Read the Docs' internal teams authorization system.
This feature is only available on the Pro plan and above. Learn how to configure this SSO method with our How to setup Single Sign-On (SSO) with Google Workspace.
Requesting additional providers
We are always interested in hearing from our users about what authentication needs they have. You can reach out to our Site support to talk with us about any additional authentication needs you might have.
Tip
Many additional providers can be supported via GitHub, Bitbucket, and GitLab SSO. We will depend on those sites in order to authenticate you, so you can use all your existing SSO methods already configured on those services.
How to manage your subscription
We want to make it easy to manage your billing information. All organization owners can manage the subscription for that organization. It’s easy to achieve a number of common tasks in this dashboard:
Update your credit card information.
Upgrade, downgrade, or cancel your plan.
View, download, and pay invoices.
Add additional tax (VAT/EIN) or contact email addresses on your invoices.
You can always find our most up to date pricing information on our pricing page.
Managing your subscription
Navigate to the subscription management page.
Click Manage Subscription.
This action will take you to the Stripe billing portal where you can manage your subscription.
Note
You will need to be an organization owner to view subscription information. If you do not have permission, you can ask one of your existing organization owners to make any required changes.
Cancelling your subscription
Cancelling your subscription can be done following the instructions in Managing your subscription. Your subscription will remain active for the remainder of the current billing period, and will not renew for the next billing period.
We can not cancel subscriptions through an email request, as email is an insecure method of verifying a user’s identity. If you email us about this, we require you to verify your identity by logging into your Read the Docs account and submitting an official support request there.
Billing information
We provide both monthly and annual subscriptions for all plans. Annual plans are given a 2 month discount compared to monthly billing. We only support credit card billing for our Basic and Advanced plans. For our Pro and Enterprise users, we support invoice-based and PO billing.
Tip
We recommend paying by credit card for all users, as this greatly simplifies the billing process.
Discounts and credits
We do not generally discount our software. We provide an ad-supported service to the community with Read the Docs Community, but we do have one standard discount that we offer.
Non-profit and academic organizations
Our community hosting, provided for free and open source projects, is generally where we recommend most academic projects to host their projects. If you have constraints on how public your documentation can be, our commercial hosting is probably a better fit.
We offer a 50% discount on all of our commercial plans to certified academic and non-profit organizations. Please contact Site support to request this discount.
How-to guides: project setup and configuration
The following how-to guides help you solve common tasks and challenges in the setup and configuration stages.
- ⏩️ Connecting your Read the Docs account to your Git provider
Steps to connect an account on GitHub, Bitbucket, or GitLab with your Read the Docs account.
- ⏩️ Configuring a Git repository automatically
Once your account is connected to your Git provider, adding and configuring a Git repository automatically is possible for GitHub, Bitbucket, and GitLab.
- ⏩️ Configuring a Git repository manually
If you are connecting a Git repository from another provider (for instance Gitea or Codeberg), this guide tells you how to add and configure the webhook manually.
- ⏩️ Managing custom domains
Hosting your documentation using your own domain name, such as docs.example.com.
- ⏩️ Using custom URL redirects in documentation projects
Configuring your Read the Docs project for redirecting visitors from one location to another.
- ⏩️ Managing subprojects
Need several projects under the same umbrella? Start using subprojects, which is a way to host multiple projects under a “main project”.
- ⏩️ Using a .readthedocs.yaml file in a sub-folder
This guide shows how to configure a Read the Docs project to use a custom path for the .readthedocs.yaml build configuration. Monorepos that have multiple documentation projects in the same Git repository can benefit from this feature.
- ⏩️ Hiding a version
Is your version (flyout) menu overwhelmed and hard to navigate? Here’s how to make it shorter.
- ⏩️ Changing the versioning scheme of your project
Change what the URLs of your documentation look like, and whether your project supports multiple versions or translations.
See also
- Read the Docs tutorial
All you need to know to get started.
How to connect your Read the Docs account to your Git provider
In this how-to article, you are shown the steps to connect an account on GitHub, Bitbucket, or GitLab with your Read the Docs account. This is relevant if you have signed up for Read the Docs with your email, or if you have signed up using a Git provider account and want to connect additional providers.
If you are going to import repositories from GitHub, Bitbucket, or GitLab, you should connect your Read the Docs account to your Git provider first.
Note
If you signed up or logged in to Read the Docs with your GitHub, Bitbucket, or GitLab credentials, you’re all done. Your account is connected ✅️. You only need this how-to if you want to connect additional Git providers.
Adding a connection
To connect your Read the Docs account with a Git provider, go to the main login menu: <Username dropdown> > Settings > Connected Services.
From here, you’ll be able to connect to your GitHub, Bitbucket, or GitLab account. This process will ask you to authorize an integration with Read the Docs.

An example of how your OAuth dialog on GitHub may look.
After approving the request, you will be taken back to Read the Docs. You will now see the account appear in the list of connected services.

Connected Services shows the list of Git providers that are connected to your Read the Docs account.
Now your connection is ready and you will be able to import and configure Git repositories with just a few clicks.
See also
- How to automatically configure a Git repository
Learn how the connected account is used for automatically configuring Git repositories and Read the Docs projects and which permissions that are required from your Git provider.
Removing a connection
You may at any time delete the connection from Read the Docs. Deleting the connection makes Read the Docs forget the immediate access, but you should also disable our OAuth Application from your Git provider.
On GitHub, navigate to Authorized OAuth Apps.
On Bitbucket, navigate to Application Authorizations.
On GitLab, navigate to Applications.
How to automatically configure a Git repository
In this article, we explain how connecting your Read the Docs account to GitHub, Bitbucket, or GitLab makes Read the Docs able to automatically configure your imported Git repositories and your Read the Docs projects.
- ✅️ Signed up with your Git provider?
If you signed up or logged in to Read the Docs with your GitHub, Bitbucket, or GitLab credentials, you’re all done. Your account is connected.
The rest of this guide helps to understand how the automatic configuration works.
- ⏩️️ Signed up with your email address?
If you have signed up to Read the Docs with your email address, you can add the connection to the Git provider afterwards. You can also add a connection to an additional Git provider this way.
Please follow How to connect your Read the Docs account to your Git provider in this case.
Automatic configuration
When your Read the Docs account is connected to GitHub, Bitbucket, or GitLab and you import a Git repository, the integration will automatically be configured on the Read the Docs project and on your Git repository.
Here is an outline of what happens:
A list of repositories that you have access to are automatically listed on Read the Docs’ project import.
You choose a Git repository from the list (see Importing your documentation).
Data about the repository is now fetched using the account connection and you are asked to confirm the setup.
When Read the Docs creates your project, it automatically sets up an integration with the Git provider, and creates an incoming webhook whereby Read the Docs is notified of all future changes to commits, branches and tags in the Git repository.
Your project’s incoming webhook is automatically added to your Git repository’s settings using the account connection.
Read the Docs also configures your project to use the Git provider’s webhook via your account connection, so your project is ready to enable Pull Request builds.
After the import, you can continue to configure the project. All settings can be modified, including the ones that were automatically created.
See also
- Manually import your docs
Using a different provider? Read the Docs still supports other providers such as Gitea or GitHub Enterprise. In fact, any Git repository URL can be configured manually.
Tip
A single Read the Docs account can connect to many different Git providers. This allows you to have a single login for all your various identities.
How does the connection work?
Read the Docs uses OAuth to connect to your account at GitHub, Bitbucket, or GitLab. You are asked to grant permissions for Read the Docs to perform a number of actions on your behalf.
At the same time, we use this process for authentication (login) since we trust that GitHub, Bitbucket, or GitLab have verified your user account and email address.
By granting Read the Docs the requested permissions, we are issued a secret OAuth token from your Git provider.
Using the secret token, we can automatically configure the repository that you select in the project import. We also use the token to send back build statuses and preview URLs for pull requests.
Note
Access granted to Read the Docs can always be revoked. This is a function offered by all Git providers.
Git provider integrations
If your project is using Organizations (Read the Docs for Business) or maintainers (Read the Docs Community), then you need to be aware of who is setting up the integration for the project.
The Read the Docs user who sets up the project through the automatic import should also have admin rights to the Git repository.
A Git provider integration is active through the authentication of the user that creates the integration. If this user is removed, make sure to verify and potentially recreate all Git integrations for the project.
Permissions for connected accounts
Read the Docs does not generally ask for write permission to your repository code (with one exception detailed below), and since we only connect to public repositories we don't need special permissions to read them. However, we do need permissions for authorizing your account so that you can log in to Read the Docs with your connected account credentials, and for setting up Continuous Documentation Deployment, which allows us to build your documentation on every change to your repository.
Read the Docs requests the following permissions (more precisely, OAuth scopes) when connecting your Read the Docs account to GitHub.
- Read access to your email address (user:email)
We ask for this so you can create a Read the Docs account and login with your GitHub credentials.
- Administering webhooks (admin:repo_hook)
We ask for this so we can create webhooks on your repositories when you import them into Read the Docs. This allows us to build the docs when you push new commits.
- Read access to your organizations (read:org)
We ask for this so we know which organizations you have access to. This allows you to filter repositories by organization when importing repositories.
- Repository status (repo:status)
Repository statuses allow Read the Docs to report the status (e.g. passed, failed, pending) of pull requests to GitHub. This is used for a feature currently in beta testing that builds documentation on each pull request, similar to a continuous integration service.
Note
Read the Docs for Business asks for one additional permission (repo) to allow access to private repositories and to allow us to set up SSH keys to clone your private repositories. Unfortunately, this is the permission for read/write control of the repository, but there isn't a more granular permission that only allows setting up SSH keys for read access.
For Bitbucket, we request permissions for:
- Administering your repositories (repository:admin)
We ask for this so we can create webhooks on your repositories when you import them into Read the Docs. This allows us to build the docs when you push new commits. NB! This permission scope does not include any write access to code.
- Reading your account information including your email address
We ask for this so you can create a Read the Docs account and login with your Bitbucket credentials.
- Read access to your team memberships
We ask for this so we know which organizations you have access to. This allows you to filter repositories by organization when importing repositories.
- Read access to your repositories
We ask for this so we know which repositories you have access to.
To read more about Bitbucket permissions, see official Bitbucket documentation on API scopes
For GitLab, like the others, we request permissions for:
Reading your account information (read_user)
API access (api), which is needed to create webhooks in GitLab
GitHub permission troubleshooting
Repositories not in your list to import.
Many organizations require approval for each OAuth application that is used, or you might have disabled it in the past for your personal account. This can happen at the personal or organization level, depending on where the project you are trying to access has permissions from.
You need to make sure that you have granted access to the Read the Docs OAuth App to your personal GitHub account. If you do not see Read the Docs in the OAuth App settings, you might need to disconnect and reconnect the GitHub service.
See also
GitHub docs on requesting access to your personal OAuth for step-by-step instructions.
You need to make sure that you have granted access to the Read the Docs OAuth App to your organization GitHub account. If you don’t see “Read the Docs” listed, then you might need to connect GitHub to your social accounts as noted above.
See also
GitHub doc on requesting access to your organization OAuth for step-by-step instructions.
How to manually configure a Git repository integration
In this guide, you will find the steps to manually integrate your Read the Docs project with any Git provider, including GitHub, Bitbucket, and GitLab.
See also
- How to automatically configure a Git repository
You are now reading the guide to configuring a Git repository manually. If your Read the Docs account is connected to the Git provider, we can set up the integration automatically.
Manual integration setup
You need to configure your Git provider integration to call a webhook that alerts Read the Docs of changes. Read the Docs will sync versions and build your documentation when your Git repository is updated.
Go to the Settings page for your GitHub project
Click Webhooks > Add webhook
For Payload URL, use the URL of the integration on your Read the Docs project, found on the project’s Admin > Integrations page. You may need to prepend https:// to the URL.
For Content type, both application/json and application/x-www-form-urlencoded work
Fill the Secret field with the value from the integration on Read the Docs
Select Let me select individual events, and mark Branch or tag creation, Branch or tag deletion, Pull requests and Pushes events
Ensure Active is enabled; it is by default
Finish by clicking Add webhook. You may be prompted to enter your GitHub password to confirm your action.
You can verify if the webhook is working at the bottom of the GitHub page under Recent Deliveries. If you see a Response 200, then the webhook is correctly configured. For a 403 error, it’s likely that the Payload URL is incorrect.
Go to the Settings > Webhooks > Add webhook page for your Bitbucket project
For URL, use the URL of the integration on Read the Docs, found on the Admin > Integrations page
Under Triggers, Repository push should be selected
Fill the Secret field with the value from the integration on Read the Docs
Finish by clicking Save
Go to the Settings > Webhooks page for your GitLab project
For URL, use the URL of the integration on your Read the Docs project, found on the Admin > Integrations page
Fill the Secret token field with the value from the integration on Read the Docs
Leave the default Push events selected, and additionally mark Tag push events and Merge request events
Finish by clicking Add Webhook
These instructions apply to any Gitea instance.
Warning
This isn’t officially supported, but using the “GitHub webhook” is an effective workaround, because Gitea uses the same payload as GitHub. The generic webhook is not compatible with Gitea. See issue #8364 for more details. Official support may be implemented in the future.
On Read the Docs:
Manually create a “GitHub webhook” integration (this will show a warning about the webhook not being correctly set up, which will go away once the webhook is configured in Gitea)
On your Gitea instance:
Go to the Settings > Webhooks page for your project on your Gitea instance
Create a new webhook of type “Gitea”
For URL, use the URL of the integration on Read the Docs, found on the Admin > Integrations page
Leave the default HTTP Method as POST
For Content type, both application/json and application/x-www-form-urlencoded work
Fill the Secret field with the value from the integration on Read the Docs
Select Choose events, and mark Branch or tag creation, Branch or tag deletion and Push events
Ensure Active is enabled; it is by default
Finish by clicking Add Webhook
Test the webhook with Delivery test
Finally, on Read the Docs, check that the warnings have disappeared and the delivery test triggered a build.
Other providers are supported through a generic webhook, see Using the generic API integration.
Additional integration
You can configure multiple incoming webhooks.
To manually set up an integration, go to Admin > Integrations > Add integration dashboard page and select the integration type you’d like to add. After you have added the integration, you’ll see a link to information about the integration.
As an example, the URL pattern looks like this: https://readthedocs.org/api/v2/webhook/<project-name>/<id>/.
Use this URL when setting up a new integration with your provider, as explained above.
Warning
Git repositories that are imported manually do not have the required setup to send back a commit status. If you need this integration, you have to configure the repository automatically.
See also
- How to setup email notifications
Quickly enable email notifications.
- How to setup build status webhooks
Learn how to add custom webhook notifications.
Using the generic API integration
For repositories that are not hosted with a supported provider, we also offer a generic API endpoint for triggering project builds. Similar to webhook integrations, this integration has a specific URL, which can be found on the project’s Integrations dashboard page (Admin > Integrations).
Token authentication is required to use the generic endpoint; you will find this token on the integration details page. The token should be passed in as a request parameter, either as form data or as part of JSON data input.
Parameters
This endpoint accepts the following arguments during an HTTP POST:
- branches
The names of the branches to trigger builds for. This can either be an array of branch name strings, or just a single branch name string.
Default: latest
- token
The integration token found on the project’s Integrations dashboard page (Admin > Integrations).
- default_branch
This is the default branch of the repository (i.e. the one checked out when cloning the repository without arguments).
Optional.
For example, the cURL command to build the dev branch, using the token 1234, would be:
curl -X POST -d "branches=dev" -d "token=1234" -d "default_branch=main" https://readthedocs.org/api/v2/webhook/example-project/1/
A command like the one above could be called from a cron job or from a hook inside Git, Subversion, Mercurial, or Bazaar.
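If you prefer to trigger builds from a script instead of cURL, a minimal Python sketch using the requests library (reusing the placeholder project slug, integration id, and token from the example above) could look like this:
import requests

# Placeholder values taken from the cURL example above; substitute your own
# project slug, integration id, and integration token.
WEBHOOK_URL = "https://readthedocs.org/api/v2/webhook/example-project/1/"

response = requests.post(
    WEBHOOK_URL,
    data={
        "token": "1234",
        "branches": ["main", "dev"],  # a single branch name string also works
        "default_branch": "main",
    },
)
response.raise_for_status()
print(response.json())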
Authentication
This endpoint requires authentication. If authenticating with an integration token, a check will determine if the token is valid and matches the given project. If instead an authenticated user is used to make this request, a check will be performed to ensure the authenticated user is an owner of the project.
Payload validation
All integrations are created with a secret token, which offers a way to verify that a webhook request is legitimate.
This validation is done differently for each provider.
Troubleshooting
Debugging webhooks
If you are experiencing problems with an existing webhook, you may be able to use the integration detail page to help debug the issue. Each project integration, such as a webhook or the generic API endpoint, stores the HTTP exchange that takes place between Read the Docs and the external source. You’ll find a list of these exchanges in any of the integration detail pages.
Webhook activation failed. Make sure you have the necessary permissions
If you find this error, make sure your user has permissions over the repository. In the case of GitHub, check that you have granted access to the Read the Docs OAuth App to your organization.
My project isn’t automatically building
If your project isn’t automatically building, you can check your integration on Read the Docs to see the payload sent to our servers. If there is no recent activity on your Read the Docs project webhook integration, then it’s likely that your VCS provider is not configured correctly. If there is payload information on your Read the Docs project, you might need to verify that your versions are configured to build correctly.
How to manage custom domains
This guide describes how to host your documentation using your own domain name, such as docs.example.com.
Adding a custom domain
To set up your custom domain, follow these steps:
Go to the Admin tab of your project.
Click on Domains.
Enter the domain where you want to serve the documentation from (e.g. docs.example.com).
Mark the Canonical option if you want to use this domain as your canonical domain.
Click on Add.
At the top of the next page you’ll find the value of the DNS record that you need to point your domain to. For Read the Docs Community this is readthedocs.io, and for Business hosting the record is in the form of <hash>.domains.readthedocs.com.
Once you have completed these steps and your new DNS entry has propagated (usually takes a few minutes), you need to build your project’s published branches again so the Canonical URL is correct.
Note
For a subdomain like docs.example.com add a CNAME record, and for a root domain like example.com use an ANAME or ALIAS record.
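As an illustration, the corresponding zone entry for the subdomain above on Read the Docs Community would look roughly like this (the exact syntax depends on your DNS provider):
docs.example.com.   CNAME   readthedocs.io.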
We provide a validated SSL certificate for the domain, managed by Cloudflare. The SSL certificate issuance should happen within a few minutes, but might take up to one hour. See SSL certificate issue delays for more troubleshooting options.
To see if your DNS change has propagated, you can use a tool like dig to inspect your domain from your command line.
As an example, our blog’s DNS record looks like this:
dig +short CNAME blog.readthedocs.com
readthedocs.io.
Warning
We don’t support pointing subdomains or root domains to a project using A records. DNS A records require a static IP address and our IPs may change without notice.
Removing a custom domain
To remove a custom domain:
Go to the Admin tab of your project.
Click on Domains.
Click the Remove button next to the domain.
Click Confirm on the confirmation page.
Warning
Once a domain is removed, your previous documentation domain is no longer served by Read the Docs, and any request for it will return a 404 Not Found!
Strict Transport Security (HSTS) and other custom headers
By default, we do not return a Strict Transport Security (HSTS) header for user custom domains. This is a conscious decision, as it can be misconfigured in a way that is not easily reversible. For both Read the Docs Community and Read the Docs for Business, HSTS and other custom headers can be set upon request.
We always return the HSTS header with a max-age of at least one year for our own domains, including *.readthedocs.io, *.readthedocs-hosted.com, readthedocs.org and readthedocs.com.
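For reference, an HSTS header with a one-year max-age looks like the following (the exact value served may differ):
Strict-Transport-Security: max-age=31536000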
Note
Please contact Site support if you want to add a custom header to your domain.
Multiple documentation sites as sub-folders of a domain
You may host multiple documentation repositories as sub-folders of a single domain.
For example, docs.example.org/projects/repo1 and docs.example.org/projects/repo2.
This is a way to boost the SEO of your website.
See also
- Subprojects
Further information about hosting multiple documentation repositories, using the subproject feature.
Troubleshooting
SSL certificate issue delays
The status of your domain validation and certificate can always be seen on the details page for your domain under Admin > Domains > YOURDOMAIN.TLD (details).
Domains are usually validated and a certificate issued within minutes. However, if you set up the domain in Read the Docs without provisioning the necessary DNS changes and then update DNS hours or days later, validation can be delayed because validation retries use an exponential back-off.
Tip
Loading the domain details in the Read the Docs dashboard and saving the domain again will force a revalidation.
The validation process period has ended
After you add a new custom domain, you have 30 days to complete the configuration. Once that period has ended, we will stop trying to validate your domain. If you still want to complete the configuration, go to your domain and click on Save to restart the process.
Migrating from GitBook
If your custom domain was previously used in GitBook, contact GitBook support (via live chat on their website) to remove the domain name from their DNS zone so that your domain name can work with Read the Docs; otherwise it will always redirect to GitBook.
How to manage subprojects
This guide shows you how to manage subprojects, which is a way to host multiple projects under a “main project”.
See also
- Subprojects
Read more about what the subproject feature can do and how it works.
Adding a subproject
In the admin dashboard for your project, select Subprojects from the left menu. From this page you can add a subproject by choosing a project from the Subproject dropdown and typing an alias in the Alias field.
Immediately after adding the subproject, it will be hosted at the URL displayed in the updated list of subprojects.

Note
- Read the Docs Community
You need to be a maintainer of a subproject in order to choose it from your main project.
- Read the Docs for Business
You need to have admin access to the subproject in order to choose it from your main project.
Editing a subproject
You can edit a subproject at any time by clicking 📝️ in the list of subprojects. On the following page, it’s possible to both change the subproject and its alias using the Subproject dropdown and the Alias field. Click Update subproject to save your changes.
The documentation served at /projects/<subproject-alias>/ will be updated immediately when you save your changes.
Deleting a subproject
You can delete a subproject at any time by clicking 📝️ in the list of subprojects. On the edit page, click Delete subproject.
Your subproject will be removed immediately and will be served from its own domain:
Previously it was served at <main-project-domain>/projects/<subproject-alias>/.
Now it will be served at <subproject-domain>/.
Deleting a subproject only removes the reference from the main project. It does not completely remove that project.
How to hide a version and keep its documentation online
If you manage a project with a lot of versions, the version (flyout) menu of your docs can be easily overwhelmed and hard to navigate.

Overwhelmed flyout menu
You can deactivate the version to remove its docs,
but removing its docs isn’t always an option.
To not list a version in the flyout menu while keeping its docs online, you can mark it as hidden.
Go to the Versions tab of your project, click on Edit and mark the Hidden option.
Users that have a link to your old version will still be able to see your docs.
And new users can see all your versions (including hidden versions) in the versions tab of your project at https://readthedocs.org/projects/<your-project>/versions/
Check the docs about versions’ states for more information.
How to use a .readthedocs.yaml file in a sub-folder
This guide shows how to configure a Read the Docs project to use a custom path for the .readthedocs.yaml build configuration.
Monorepos that have multiple documentation projects in the same Git repository can benefit from this feature.
By default, Read the Docs will use the .readthedocs.yaml at the top level of your Git repository. But if a Git repository contains multiple documentation projects that need different build configurations, you will need to have a .readthedocs.yaml file in multiple sub-folders.
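For example, a hypothetical monorepo hosting two documentation projects, each with its own configuration file, might be laid out like this (folder names are illustrative):
├── lib-one/
│   └── docs/
│       ├── .readthedocs.yaml
│       └── source/
├── lib-two/
│   └── docs/
│       ├── .readthedocs.yaml
│       └── source/
└── README.rst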
See also
- sphinx-multiproject
If you are only using Sphinx projects and want to share the same build configuration, you can also use the sphinx-multiproject extension.
- How to use custom environment variables
You might also be able to reuse the same configuration file across multiple projects, using only environment variables. This is possible if the configuration pattern is very similar and the documentation tool is the same.
Implementation considerations
This feature is currently project-wide. A custom build configuration file path is applied to all versions of your documentation.
Warning
Changing the configuration path will apply to all versions. Different versions of the project may not be able to build again if this path is changed.
Adding an additional project from the same repository
Once you have added the first project from the Import Wizard, it will show as if it has already been imported and cannot be imported again. In order to add another project with the same repository, you will need to use the Manual Import.
Setting the custom build configuration file
Once you have added a Git repository to a project that needs a custom configuration file path, navigate to your project’s settings and add the path to the Build configuration file field.
Tip
Having multiple different build configuration files can be complex. We recommend setting up one or two projects in your monorepo and getting them to build and publish successfully before adding additional projects to the equation.
Next steps
Once you have your monorepo pattern implemented and tested and it’s ready to roll out to all your projects, you should also consider the Read the Docs project setup for these individual projects.
Having individual projects gives you the full flexibility of the Read the Docs platform to make individual setups for each project.
For each project, it’s now possible to configure:
Sets of maintainers (or organizations on Read the Docs for Business)
Additional documentation tools with individual build processes: One project might use Sphinx, while another project setup might use Asciidoctor.
…and much more. All settings for a Read the Docs project are available for each individual project.
See also
- How to manage subprojects
More information on nesting one project inside another project. In this setup, it is still possible to use the same monorepo for each subproject.
Other tips
For a monorepo, it’s not desirable to have changes in unrelated sub-folders trigger new builds. Therefore, you should consider setting up conditional build cancellation rules. The configuration is added in each .readthedocs.yaml, making it possible to write conditional build rules per documentation project in the monorepo 💯️
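As a rough sketch of such a rule for one of the projects (the folder and branch names are hypothetical; exiting the post_checkout job with code 183 tells Read the Docs to cancel the build):
version: 2

build:
  os: ubuntu-22.04
  tools:
    python: "3.12"
  jobs:
    post_checkout:
      # Cancel this project's build when nothing under lib-one/docs/ has
      # changed compared to the main branch (exit code 183 cancels the build).
      - |
        if git diff --quiet origin/main -- lib-one/docs/;
        then
          exit 183;
        fi

sphinx:
  configuration: lib-one/docs/source/conf.py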
How to use custom URL redirects in documentation projects
In this guide, you will learn the steps necessary to configure your Read the Docs project for redirecting visitors from one location to another.
User-defined redirects are issued by our servers when a reader visits an old URL, which means that the reader is automatically redirected to a new URL.
See also
- Best practices for linking to your documentation
The need for a redirect often comes from external links to your documentation. Read more about handling links in this explanation of best practices.
- Redirects
If you want to know more about our implementation of redirects, you can look up more examples in our reference before continuing with the how-to.
Adding a redirect rule
Redirects are configured from the project dashboard.
After clicking Add Redirect, you need to select a Redirect Type. This is where things get a bit more complicated: you need to fill in specific information according to that choice.
Choosing a Redirect Type
There are different types of redirect rules targeting different needs. For each choice in Redirect Type, you can mark the choice in order to experiment and preview the final rule generated.

Here is a quick overview of the options available in Redirect Type:
- Page redirect
With this option, you can specify a page in your documentation to redirect elsewhere. The rule triggers no matter the version of your documentation that the user is visiting. This rule can also redirect to another website.
Read more about this option in Page redirects.
- Exact redirect
With this option, you can specify a page in your documentation to redirect elsewhere. The rule is specific to the language and version of your documentation that the user is visiting. This rule can also redirect to another website.
Read more about this option in Exact redirects.
- Clean URL to HTML
If you choose to change the style of your URLs from clean URLs (/en/latest/tutorial/) to HTML URLs (/en/latest/tutorial.html), you can redirect all mismatches automatically.
Read more about this option in Clean/HTML URLs redirects.
- HTML to clean URL
Similarly to the former option, if you choose to change the style of your URLs from HTML URLs (/en/latest/tutorial.html) to clean URLs (/en/latest/tutorial/), you can redirect all mismatches automatically.
Read more about this option in Clean/HTML URLs redirects.
Note
By default, redirects are followed only if the requested page doesn’t exist (404 File Not Found error). If you need to apply a redirect for files that exist, you can use the Apply even if the page exists option. This option is only available on some plan levels; please ask support to enable it for you.
Defining the redirect rule
As mentioned before, you can pick and choose a Redirect Type that fits your needs. When you have entered a From URL and To URL and the redirect preview looks good, you are ready to save the rule.
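For example, a Page redirect for a page that was renamed (both paths here are hypothetical) could be defined as:
From URL: /install.html
To URL:   /installation.html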
Saving the redirect
The redirect is not activated until you click Save. Before clicking, you are free to experiment and preview the effects. Your redirect rule is added and takes effect immediately after you save it.
After adding the rule, you can add more redirects as needed. There is no upper bound on how many redirect rules a project may define.
Editing and deleting redirect rules
You can always revisit this page in order to delete a rule or edit it.
When editing a rule, you can change its Redirect Type and its From URL or To URL.
Changing the order of redirects
The order of redirects is important: if you have multiple rules that match the same URL, the first redirect in the list will be used.
You can change the order of the redirects from the same page by using the up and down arrow buttons.
New redirects are added at the start of the list (i.e. they have the highest priority).
How to change the versioning scheme of your project
In this guide, we show you how to change the versioning scheme of your project on Read the Docs.
Changing the versioning scheme of your project will affect the URLs of your documentation; any existing links to your documentation will break. If you want to keep the old URLs working, you can create redirects.
See also
- URL versioning schemes
Reference of all the versioning schemes supported by Read the Docs.
- Versions
General explanation of how versioning works on Read the Docs.
Changing the versioning scheme
Go to the Admin tab of your project.
Click on Settings.
Select the new versioning scheme in the Versioning scheme dropdown.
Click on Save.
How-to guides: build process
- ⏩️ Setup email notifications
Email notifications can alert you when your builds fail. This is the simplest way to monitor your documentation builds; it only requires you to switch it on.
- ⏩️ Setup webhook notifications
Webhook notifications can alert you when your builds fail so you can take immediate action. We show examples of how to use the webhooks on popular platforms like Slack and Discord.
- ⏩️ Configuring pull request builds
Have your documentation built and access a preview for every pull request.
- ⏩️ Using custom environment variables
Extra environment variables, for instance secrets, may be needed in the build process and can be defined from the project’s dashboard.
- ⏩️ Managing versions automatically
Automating your versioning on Read the Docs means you only have to handle your versioning logic in Git. Learn how to define rules to automate creation of new versions on Read the Docs, entirely using your Git repository’s version logic.
How to setup email notifications
In this brief guide, you can learn how to setup a simple build notification via email.
Read the Docs allows you to configure email addresses that will be notified of failing builds. This makes sure that you are aware of failures happening in an otherwise automated process.
See also
- How to setup build status webhooks
How to use webhooks to be notified about builds on popular platforms like Slack and Discord.
- Pull request previews
Configure automated feedback and documentation site previews for your pull requests.
Email notifications
Follow these steps to add an email address to be notified about build failures:
Go to the email notification settings of your project.
Fill in the Email field under the New Email Notifications heading.
Press Add and the email is saved and will be displayed in the list of Existing notifications.
The newly added email address will be notified once a build fails.
Note
We don’t send email notifications on builds from pull requests.
How to setup build status webhooks
In this guide, you can learn how to setup build notifications via webhooks.
When a documentation build is triggered, successful or failed, Read the Docs can notify external APIs using webhooks. That way, you can receive build notifications in your own monitoring channels and be alerted when your builds fail so you can take immediate action.
See also
- How to setup email notifications
Setup basic email notifications for build failures.
- Pull request previews
Configure automated feedback and documentation site previews for your pull requests.
Build status webhooks
Take these steps to enable build notifications using a webhook:
Go to the webhook settings of your project.
Fill in the URL field and select what events will trigger the webhook.
Modify the payload or leave the default (see below)
Click on Save

URL and events for a webhook
Every time one of the checked events triggers, Read the Docs will send a POST request to your webhook URL. The default payload will look like this:
{
"event": "build:triggered",
"name": "docs",
"slug": "docs",
"version": "latest",
"commit": "2552bb609ca46865dc36401dee0b1865a0aee52d",
"build": "15173336",
"start_date": "2021-11-03T16:23:14",
"build_url": "https://readthedocs.org/projects/docs/builds/15173336/",
"docs_url": "https://docs.readthedocs.io/en/latest/"
}
When a webhook is sent, a new entry will be added to the “Recent Activity” table. By clicking on each individual entry, you will see the server response, the webhook request, and the payload.

Activity of a webhook
Note
We don’t trigger webhooks on builds from pull requests.
Custom payload examples
You can customize the payload of the webhook to suit your needs, as long as it is valid JSON. Below you have a couple of examples, and in the following section you will find all the available variables.

Custom payload
{
"attachments": [
{
"color": "#db3238",
"blocks": [
{
"type": "section",
"text": {
"type": "mrkdwn",
"text": "*Read the Docs build failed*"
}
},
{
"type": "section",
"fields": [
{
"type": "mrkdwn",
"text": "*Project*: <{{ project.url }}|{{ project.name }}>"
},
{
"type": "mrkdwn",
"text": "*Version*: {{ version.name }} ({{ build.commit }})"
},
{
"type": "mrkdwn",
"text": "*Build*: <{{ build.url }}|{{ build.id }}>"
}
]
}
]
}
]
}
More information on the Slack Incoming Webhooks documentation.
{
"username": "Read the Docs",
"content": "Read the Docs build failed",
"embeds": [
{
"title": "Build logs",
"url": "{{ build.url }}",
"color": 15258703,
"fields": [
{
"name": "*Project*",
"value": "{{ project.url }}",
"inline": true
},
{
"name": "*Version*",
"value": "{{ version.name }} ({{ build.commit }})",
"inline": true
},
{
"name": "*Build*",
"value": "{{ build.url }}"
}
]
}
]
}
More information on the Discord webhooks documentation.
Variable substitutions reference
{{ event }}
Event that triggered the webhook, one of build:triggered, build:failed, or build:passed.
{{ build.id }}
Build ID.
{{ build.commit }}
Commit corresponding to the build, if present (otherwise "").
{{ build.url }}
URL of the build, for example https://readthedocs.org/projects/docs/builds/15173336/.
{{ build.docs_url }}
URL of the documentation corresponding to the build, for example https://docs.readthedocs.io/en/latest/.
{{ build.start_date }}
Start date of the build (UTC, ISO format), for example 2021-11-03T16:23:14.
{{ organization.name }}
Organization name (Commercial only).
{{ organization.slug }}
Organization slug (Commercial only).
{{ project.slug }}
Project slug.
{{ project.name }}
Project name.
{{ project.url }}
URL of the project dashboard.
{{ version.slug }}
Version slug.
{{ version.name }}
Version name.
Validating the payload
After you add a new webhook, Read the Docs will generate a secret key for it and use it to create a hash signature (HMAC-SHA256) for each payload, which is included in the X-Hub-Signature header of the request.

Webhook secret
We highly recommend using this signature to verify that the webhook is coming from Read the Docs. To do so, you can add some custom code on your server, like this:
import hashlib
import hmac
import os


def verify_signature(payload, request_headers):
    """
    Verify that the signature of payload is the same as the one coming from request_headers.
    """
    digest = hmac.new(
        key=os.environ["WEBHOOK_SECRET"].encode(),
        msg=payload.encode(),
        digestmod=hashlib.sha256,
    )
    expected_signature = digest.hexdigest()
    return hmac.compare_digest(
        request_headers["X-Hub-Signature"].encode(),
        expected_signature.encode(),
    )
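As a rough sketch of how the function above might be wired into a small receiving server (the port and module layout are illustrative, and WEBHOOK_SECRET is assumed to be set in the environment as above):
from http.server import BaseHTTPRequestHandler, HTTPServer


class WebhookHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # Read the raw request body sent by Read the Docs.
        length = int(self.headers.get("Content-Length", 0))
        payload = self.rfile.read(length).decode()

        # verify_signature() is the helper defined above; it compares the
        # X-Hub-Signature header against an HMAC-SHA256 of the payload.
        if not verify_signature(payload, self.headers):
            self.send_response(403)
            self.end_headers()
            return

        # The payload can be trusted at this point; react to the build event here.
        self.send_response(200)
        self.end_headers()


if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8000), WebhookHandler).serve_forever()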
Legacy webhooks
Webhooks created before the custom payloads functionality was added to Read the Docs send a payload with the following structure:
{
"name": "Read the Docs",
"slug": "rtd",
"build": {
"id": 6321373,
"commit": "e8dd17a3f1627dd206d721e4be08ae6766fda40",
"state": "finished",
"success": false,
"date": "2017-02-15 20:35:54"
}
}
To migrate to the new webhooks and keep a similar structure, you can use this payload:
{
"name": "{{ project.name }}",
"slug": "{{ project.slug }}",
"build": {
"id": "{{ build.id }}",
"commit": "{{ build.commit }}",
"state": "{{ event }}",
"date": "{{ build.start_date }}"
}
}
Troubleshooting webhooks and payload discovery
You can use public tools to discover, inspect and test webhook integration. These tools act as catch-all endpoints for HTTP requests and respond with a 200 OK HTTP status code. You can use these payloads to develop your webhook services. You should exercise caution when using these tools as you might be sending sensitive data to external tools.
These public tools include:
Beeceptor to create a temporary HTTPS endpoint and inspect incoming payloads. It lets you respond with custom response codes or messages from a named HTTP mock server.
Webhook Tester to inspect and debug incoming payloads. It lets you inspect all incoming requests to its URL/bucket.
How to configure pull request builds
In this section, you can learn how to configure pull request builds.
To enable pull request builds for your project, your Read the Docs account needs to be connected to an account with a supported Git provider. See Limitations for more information.
If your account is already connected:
Go to your project dashboard
Go to Admin, then Settings
Enable the Build pull requests for this project option
Click on Save
Tip
Pull requests opened before enabling pull request builds will not trigger new builds automatically. Push a new commit to the pull request to trigger its first build.
Privacy levels
Note
Privacy levels are only supported on Business hosting.
If you didn’t import your project manually and your repository is public, the privacy level of pull request previews will be set to Public, otherwise it will be set to Private. Public pull request previews are available to anyone with the link to the preview, while private previews are only available to users with access to the Read the Docs project.
Warning
If you set the privacy level of pull request previews to Private, make sure that only trusted users can open pull requests in your repository.
Setting pull request previews to private on a public repository can allow a malicious user to access read-only APIs using the user’s session that is reading the pull request preview. Similar to GHSA-pw32-ffxw-68rh.
To change the privacy level:
Go to your project dashboard
Go to Admin, then Settings
Select your option in Privacy level of builds from pull requests
Click on Save
Privacy levels work the same way as normal versions.
Limitations
Only available for GitHub and GitLab currently. Bitbucket is not yet supported.
To enable this feature, your Read the Docs account needs to be connected to an account with your Git provider.
Builds from pull requests have the same memory and time limitations as regular builds.
Additional formats like PDF and EPUB aren’t built, to reduce build time.
Search queries will default to the default experience for your tool. This is a feature we plan to add, but don’t want to overwhelm our search indexes used in production.
The built documentation is kept for 90 days after the pull request has been closed or merged.
Troubleshooting
- No new builds are started when I open a pull request
The most common cause is that your repository’s webhook is not configured to send Read the Docs pull request events. You’ll need to re-sync your project’s webhook integration to reconfigure the Read the Docs webhook.
To resync your project’s webhook, go to your project’s admin dashboard, Integrations, and then select the webhook integration for your provider. Follow the directions on to re-sync the webhook, or create a new webhook integration.
You may also notice this behavior if your Read the Docs account is not connected to your Git provider account, or if it needs to be reconnected. You can (re)connect your account by going to your <Username dropdown>, Settings, then to Connected Services.
- Build status is not being reported to your Git provider
If opening a pull request does start a new build, but the build status is not being updated with your Git provider, then your connected account may have outdated or insufficient permissions.
Make sure that you have granted access to the Read the Docs OAuth App for your personal or organization GitHub account. You can also try reconnecting your account with your Git provider.
How to use custom environment variables
If extra environment variables are needed in the build process, you can define them from the project’s dashboard.
See also
- Environment variable overview
Learn more about how Read the Docs applies environment variables in your builds.
Go to your project’s Add Environment Variable. You will then see the form for adding an environment variable:
and click on
Fill in the Name field; this is the name of your variable, for instance SECRET_TOKEN or PIP_EXTRA_INDEX_URL.
Fill in the Value field with the environment variable’s value, for instance a secret token or a build configuration.
Check the Public option if you want to expose this environment variable to builds from pull requests.
Warning
If you make your environment variable public, any user that can create a pull request on your repository will be able to see the value of this environment variable. In other words, do not use this option if your environment variable is a secret.
Finally, click on Save. Your custom environment variable is immediately active for all future builds and you are ready to start using it.
Note that once you create an environment variable, you won’t be able to edit or view its value. The only way to edit an environment variable is to delete and create it again.
Keep reading for a few examples of using environment variables. ⬇️
Accessing environment variables in code
After adding an environment variable, you can read it from your build process, for example in your Sphinx’s configuration file:
import os
import requests
# Access to our custom environment variables
username = os.environ.get("USERNAME")
password = os.environ.get("PASSWORD")
# Request a username/password protected URL
response = requests.get(
"https://httpbin.org/basic-auth/username/password",
auth=(username, password),
)
Accessing environment variables in build commands
You can also use any of these variables from user-defined build jobs in your project’s configuration file:
version: 2

build:
  os: ubuntu-22.04
  tools:
    python: "3.10"
  jobs:
    post_install:
      - curl -u ${USERNAME}:${PASSWORD} https://httpbin.org/basic-auth/username/password
Note
If you use ${SECRET_ENV} in a command in .readthedocs.yaml, the private value of the environment variable is not substituted in the log entries of the command. Instead, it will be logged as ${SECRET_ENV}.
How to manage versions automatically
In this guide, we show you how to define rules to automate creation of new versions on Read the Docs, using your Git repository’s version logic. Automating your versioning on Read the Docs means you only have to handle your versioning logic in Git.
See also
- Versions
Learn more about versioning of documentation in general.
- Automation rules
Reference for all different rules and actions possible with automation.
Adding a new automation rule
First you need to go to the automation rule creation page:
Navigate to the Automation Rules page of your project.
Click on Add Rule and you will see the following form.

In the Automation Rule form, you need to fill in 4 fields:
Enter a Description that you can refer to later. For example, entering “Create new stable version” is a good title, as it explains the intention of the rule.
Choose a Match, which is the pattern you wish to detect in either a Git branch or tag.
Any version matches all values.
SemVer versions matches only values that have the SemVer format.
Custom match matches your own pattern (entered below). If you choose this option, a field Custom match will automatically appear below the drop-down where you can add a regular expression in Python regex format (see the example after these steps).
Choose a Version type. You can choose between Tag or Branch, denoting Git tag or Git branch.
Finally, choose the Action to be performed when the rule matches.
Now your rule is ready and you can press Save. The rule takes effect immediately when a new version is created, but does not apply to old versions.
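For instance, if you only want the rule to match release tags such as v1.2.3, a hypothetical Custom match pattern in Python regex format could be:
^v\d+\.\d+\.\d+$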
Tip
- Examples of common usage
See the list of examples for rules that are commonly used.
- Want to test if your rule works?
If you are using Git in order to create new versions, create a Git tag or branch that matches the rule and check if your automation action is triggered. After the experiment, you can delete both from Git and Read the Docs.
Ordering your rules
The order your rules are listed in matters. Each action will be performed in that order, so earlier rules have a higher priority.
You can change the order using the up and down arrow buttons.
Note
New rules are added at the start of the list (i.e. they have the highest priority).
How-to guides: upgrading and maintaining projects
- ⏩️ Creating reproducible builds
Using Sphinx, themes and extensions means that your documentation project needs to fetch a set of dependencies, each with a special version. Over time, using an unspecified version means that documentation projects suddenly start breaking. In this guide, you learn how to secure your project against sudden breakage. This is one of our most popular guides!
- ⏩️ Using Conda as your Python environment
Read the Docs supports Conda and Mamba as environment management tools. In this guide, we show the practical steps of using Conda or Mamba.
How to use Conda as your Python environment
Read the Docs supports Conda as an environment management tool, along with Virtualenv. Conda support is useful for people who depend on C libraries, and need them installed when building their documentation.
This work was funded by Clinical Graphics – many thanks for their support of open source.
Activating conda
Conda support is available using a Configuration file overview, see conda.
Our Docker images use Miniconda, a minimal conda installer.
After specifying your project requirements using a conda environment.yml file, Read the Docs will create the environment (using conda env create) and add the core dependencies needed to build the documentation.
Creating the environment.yml
There are several ways of exporting a conda environment:
conda env export will produce a complete list of all the packages installed in the environment with their exact versions. This is the best option to ensure reproducibility, but can create problems if done from a different operating system than the target machine, in our case Ubuntu Linux.
conda env export --from-history will only include packages that were explicitly requested in the environment, excluding the transitive dependencies. This is the best option to maximize cross-platform compatibility; however, it may include packages that are not needed to build your docs.
And finally, you can also write it by hand. This allows you to pick exactly the packages needed to build your docs (which also results in faster builds) and overcomes some limitations in the conda exporting capabilities.
For example, using the second method for an existing environment:
$ conda activate rtd38
(rtd38) $ conda env export --from-history | tee environment.yml
name: rtd38
channels:
- defaults
- conda-forge
dependencies:
- rasterio==1.2
- python=3.8
- pytorch-cpu=1.7
prefix: /home/docs/.conda/envs/rtd38
Read the Docs will override the name and prefix of the environment when creating it, so they can have any value, or not be present at all.
Tip
Bear in mind that rasterio==1.2 (double ==) will install version 1.2.0, whereas python=3.8 (single =) will fetch the latest 3.8.* version, which is 3.8.8 at the time of writing.
Effective use of channels
Conda packages are usually hosted on https://anaconda.org/, a registration-free artifact archive maintained by Anaconda Inc. Contrary to what happens with the Python Package Index, different users can potentially host the same package in the same repository, each of them using their own channel. Therefore, when installing a conda package, conda also needs to know which channels to use, and which ones take precedence.
If not specified, conda will use defaults, the channel maintained by Anaconda Inc. and subject to the Anaconda Terms of Service. It contains well-tested versions of the most widely used packages. However, some packages are not available on the defaults channel, and even if they are, they might not be on their latest versions.
As an alternative, there are channels maintained by the community that have a broader selection of packages and more up-to-date versions of them, the most popular one being conda-forge.
To use the conda-forge channel when specifying your project dependencies, include it in the list of channels in environment.yml, and conda will rank them in order of appearance. To maximize compatibility, we recommend putting conda-forge above defaults:
name: rtd38
channels:
- conda-forge
- defaults
dependencies:
- python=3.8
# Rest of the dependencies
Tip
If you want to opt out of the defaults channel completely, replace it with nodefaults in the list of channels. See the relevant conda docs for more information.
Making builds faster with mamba
One important thing to note is that, when the conda-forge channel is enabled, the conda dependency solver requires a large amount of RAM and takes a long time to resolve dependencies. This is a known issue due to the sheer number of packages available in conda-forge.
As an alternative, you can instruct Read the Docs to use mamba, a drop-in replacement for conda that is much faster and reduces the memory consumption of the dependency solving process.
To do that, add a .readthedocs.yaml configuration file with these contents:
version: 2

build:
  os: "ubuntu-20.04"
  tools:
    python: "mambaforge-22.9"

conda:
  environment: environment.yml
You can read more about the build.tools.python configuration in our documentation.
Mixing conda and pip packages
There are valid reasons to use pip inside a conda environment: some dependency might not be available yet as a conda package in any channel, or you might want to avoid precompiled binaries entirely. In either case, it is possible to specify the subset of packages that will be installed with pip in the environment.yml file. For example:
name: rtd38
channels:
- conda-forge
- defaults
dependencies:
- rasterio==1.2
- python=3.8
- pytorch-cpu=1.7
- pip>=20.1 # pip is needed as dependency
- pip:
- black==20.8b1
The conda developers recommend in their best practices to install as many requirements as possible with conda, then use pip to minimize possible conflicts and interoperability issues.
Warning
Notice that conda env export --from-history does not include packages installed with pip; see this conda issue for discussion.
Compiling your project sources
If your project contains extension modules written in a compiled language (C, C++, FORTRAN) or server-side JavaScript, you might need special tools to build it from source that are not readily available on our Docker images, such as a suitable compiler, CMake, Node.js, and others.
Luckily, conda is a language-agnostic package manager, and many of these development tools are already packaged on conda-forge or more specialized channels.
For example, this conda environment contains the required dependencies to compile Slycot on Read the Docs:
name: slycot38
channels:
- conda-forge
- defaults
dependencies:
- python=3.8
- cmake
- numpy
- compilers
Troubleshooting
If you have problems on the environment creation phase, either because the build runs out of memory or time or because some conflicts are found, you can try some of these mitigations:
Reduce the number of channels in environment.yml, even leaving conda-forge only and opting out of the defaults by adding nodefaults.
Constrain the package versions as much as possible to reduce the solution space.
Use mamba, an alternative package manager fully compatible with conda packages.
And, if all else fails, request more resources.
Custom Installs
If you are running a custom installation of Read the Docs, you will need the conda executable installed somewhere on your PATH. Because of the way conda works, we can’t safely install it as a normal dependency into the normal Python virtualenv.
Warning
Installing conda into a virtualenv will override the activate script, making it so you can’t properly activate that virtualenv anymore.
How-to guides: content, themes and SEO
- ⏩️ Search engine optimization (SEO) for documentation projects
This article explains how documentation can be optimized to appear in search results, ultimately increasing traffic to your docs.
- ⏩️ Enabling canonical URLs
In this guide, we introduce relevant settings for enabling canonical URLs in popular documentation frameworks.
- ⏩️ Using traffic analytics
In this guide, you can learn to use Read the Docs’ built-in traffic analytics for your documentation project. You will also learn how to optionally add your own Google Analytics account or completely disable Google Analytics on your project.
- ⏩️ Managing translations for Sphinx projects
This guide walks through the process needed to manage translations of your documentation. Once this work is done, you can setup your project under Read the Docs to build each language of your documentation by reading Localization and Internationalization.
- ⏩️ Supporting Unicode in Sphinx PDFs
Sphinx offers different LaTeX engines that have better support for Unicode characters, relevant for instance for Japanese or Chinese.
- ⏩️ Cross-referencing with Sphinx
When writing documentation you often need to link to other pages of your documentation, other sections of the current page, or sections from other pages.
- ⏩️ Linking to other projects with Intersphinx
This section shows you how to maintain references to named sections of other external Sphinx projects.
- ⏩️ Using Jupyter notebooks in Sphinx
There are a few extensions that allow integrating Jupyter and Sphinx, and this document will explain how to achieve some of the most commonly requested features.
- ⏩️ Migrating from rST to MyST
In this guide, you will find how you can start writing Markdown in your existing reStructuredText project, or migrate it completely.
- ⏩️ Enabling offline formats
This guide provides step-by-step instructions to enabling offline formats of your documentation.
- ⏩️ Using search analytics
In this guide, you can learn to use Read the Docs’ built-in search analytics for your documentation project.
- ⏩️ Adding custom CSS or JavaScript to Sphinx documentation
Adding additional CSS or JavaScript files to your Sphinx documentation can let you customize the look and feel of your docs or add additional functionality.
- ⏩️ Embedding content from your documentation
Did you know that Read the Docs has a public API that you can use to embed documentation content? There are a number of use cases for embedding content, so we’ve built our integration in a way that enables users to build on top of it.
- ⏩️ Removing “Edit on …” buttons from documentation
When building your documentation, Read the Docs automatically adds buttons at the top of your documentation and in the versions menu that point readers to your repository to make changes. Here’s how to remove it.
- ⏩️ Adding “Edit Source” links on your Sphinx theme
Using your own theme? Read the Docs injects some extra variables in the Sphinx html_context, some of which you can use to add an “edit source” link at the top of all pages.
How to do search engine optimization (SEO) for documentation projects
This article explains how documentation can be optimized to appear in search results, ultimately increasing traffic to your docs.
While you optimize your docs to make them more friendly for search engine spiders/crawlers, it’s important to keep in mind that your ultimate goal is to make your docs more discoverable for your users.
By following our best practices for SEO, you can ensure that when a user types a question into a search engine, they can get the answers from your documentation in the search results.
See also
This guide isn’t meant to be your only resource on SEO, and there’s a lot of SEO topics not covered here. For additional reading, please see the external resources section.
SEO basics
Search engines like Google and Bing crawl through the internet following links in an attempt to understand and build an index of what various pages and sites are about. This is called “crawling” or “indexing”. When a person sends a query to a search engine, the search engine evaluates this index using a number of factors and attempts to return the results most likely to answer that person’s question.
How search engines “rank” sites based on a person’s query is part of their secret sauce. While some search engines publish the basics of their algorithms (see Google’s published details on PageRank), few search engines give all of the details in an attempt to prevent users from gaming the rankings with low value content which happens to rank well.
Both Google and Bing publish a set of guidelines to help make sites easier to understand for search engines and rank better. To summarize some of the most important aspects as they apply to technical documentation, your site should:
Use descriptive and accurate titles in the HTML <title> tag. For Sphinx, the <title> comes from the first heading on the page.
Ensure your URLs are descriptive. They are displayed in search results. Sphinx uses the source filename without the file extension as the URL.
Make sure the words your readers would search for to find your site are actually included on your pages.
Avoid low content pages or pages with very little original content.
Avoid tactics that attempt to increase your search engine ranking without actually improving content.
Google specifically warns about automatically generated content, although this applies primarily to keyword stuffing and low-value content. High-quality documentation generated from source code (e.g. auto-generated API documentation) seems OK.
While both Google and Bing discuss site performance as an important factor in search result ranking, this guide is not going to discuss it in detail. Most technical documentation that uses Sphinx or Read the Docs generates static HTML and the performance is typically decent relative to most of the internet.
Best practices for documentation SEO
Once a crawler or spider finds your site, it will follow links and redirects in an attempt to find any and all pages on your site. While there are a few ways to guide the search engine in its crawl, for example by using a sitemap or a robots.txt file (which we’ll discuss shortly), the most important thing is making sure the spider can follow links on your site and get to all your pages.
Avoid unlinked pages ✅️
When building your documentation, you should ensure that pages aren’t unlinked, meaning that no other pages or navigation have a link to them.
Search engine crawlers will not discover pages that aren’t linked from somewhere else on your site.
Sphinx calls pages that don’t have links to them “orphans” and will throw a warning while building documentation that contains an orphan unless the warning is silenced with the orphan directive.
We recommend failing your builds whenever Sphinx warns you, using the fail_on_warning option in .readthedocs.yaml.
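As a minimal sketch of that configuration (assuming your Sphinx configuration lives at docs/conf.py), the relevant part of .readthedocs.yaml could look like:
version: 2

build:
  os: ubuntu-22.04
  tools:
    python: "3.12"

sphinx:
  configuration: docs/conf.py
  # Turn every Sphinx warning, including orphan pages, into a failed build.
  fail_on_warning: true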
Here is an example of a warning of an unreferenced page:
$ make html
sphinx-build -b html -d _build/doctrees . _build/html
Running Sphinx v1.8.5
...
checking consistency... /path/to/file.rst: WARNING: document isn't included in any toctree
done
...
build finished with problems, 1 warning.
MkDocs automatically includes all .md files in the main navigation 💯️. This makes sure that all files are discoverable by default; however, there are configurations that allow for unlinked files in various ways. If you want to scan your documentation for unreferenced files and images, a plugin like mkdocs-unused-files does the job.
Avoid uncrawlable content ✅️
While typically this isn’t a problem with technical documentation, try to avoid content that is “hidden” from search engines. This includes content hidden in images or videos which the crawler may not understand. For example, if you do have a video in your docs, make sure the rest of that page describes the content of the video.
When using images, make sure to set the image alt text or set a caption on figures.
For Sphinx, the image and figure directives support both alt texts and captions:
.. image:: your-image.png
:alt: A description of this image
.. figure:: your-image.png
A caption for this figure
The Markdown syntax defines an alt text for images:
![A description of this image](your-image.png){ width="300" }
Though HTML supports figures and captions, Markdown and MkDocs do not have a built-in feature. Instead, you can use markdown extensions such as md-in-html to allow the necessary HTML structures for including figures:
<figure markdown>
  ![A description of this image](your-image.png){ width="300" }
  <figcaption>Image caption</figcaption>
</figure>
Redirects ✅️
Redirects tell search engines when content has moved.
For example, if this guide moved from guides/technical-docs-seo-guide.html to guides/sphinx-seo-guide.html, there will be a time period where search engines will still have the old URL in their index and will still be showing it to users. This is why it is important to update your own links within your docs as well as redirecting. If the hostname moved from docs.readthedocs.io to docs.readthedocs.org, this would be even more important!
Read the Docs supports a few different kinds of user defined redirects that should cover all the different cases such as redirecting a certain page for all project versions, or redirecting one version to another.
Canonical URLs ✅️
Anytime very similar content is hosted at multiple URLs, it is pretty important to set a canonical URL. The canonical URL tells search engines where the original version of your documentation is, even if you have multiple versions on the internet (for example, incomplete translations or deprecated versions).
Read the Docs supports setting the canonical URL if you are using a custom domain under Admin > Domains in the Read the Docs dashboard.
Use a robots.txt file ✅️
A robots.txt file is readable by crawlers and lives at the root of your site (e.g. https://docs.readthedocs.io/robots.txt). It tells search engines which pages to crawl or not to crawl and can allow you to control how a search engine crawls your site. For example, you may want to request that search engines ignore unsupported versions of your documentation while keeping those docs online in case people need them.
By default, Read the Docs serves a robots.txt for you. To customize this file, you can create a robots.txt file that is written to your documentation root on your default branch/version.
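For example, a hypothetical robots.txt that asks crawlers to skip an old, unsupported version (the version path is illustrative) could look like this:
User-agent: *
# Ask crawlers to skip an unsupported version that is kept online for readers.
Disallow: /en/0.1/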
See the Google’s documentation on robots.txt for additional details.
Use a sitemap.xml file ✅️
A sitemap is a file readable by crawlers that contains a list of pages and other files on your site and some metadata or relationships about them (e.g. https://docs.readthedocs.io/sitemap.xml). A good sitemap provides information like how frequently a page or file is updated or any alternate language versions of a page.
Read the Docs generates a sitemap for you that contains the last time your documentation was updated as well as links to active versions, subprojects, and translations your project has. We have a small separate guide on sitemaps.
See the Google docs on building a sitemap.
Measure, iterate, & improve
Search engines (and soon, Read the Docs itself) can provide useful data that you can use to improve your docs’ ranking on search engines.
Search engine feedback
Google Search Console and Bing Webmaster Tools are tools for webmasters to get feedback about the crawling of their sites (or docs in our case). Some of the most valuable feedback these provide include:
Google and Bing will show pages that were previously indexed that now give a 404 (or more rarely a 500 or other status code). These will remain in the index for some time but will eventually be removed. This is a good opportunity to create a redirect.
These tools will show any crawl issues with your documentation.
Search Console and Webmaster Tools will highlight security issues found or if Google or Bing took action against your site because they believe it is spammy.
Analytics tools
A tool like Google Analytics can give you feedback about the search terms people use to find your docs, your most popular pages, and lots of other useful data.
Search term feedback can be used to help you optimize content for certain keywords or for related keywords. For Sphinx documentation, or other technical documentation that has its own search features, analytics tools can also tell you the terms people search for within your site.
Knowing your popular pages can help you prioritize where to spend your SEO efforts. Optimizing your already popular pages can have a significant impact.
External resources
Here are a few additional resources to help you learn more about SEO and rank better with search engines.
How to use traffic analytics
In this guide, you can learn to use Read the Docs’ built-in traffic analytics for your documentation project. You will also learn how to optionally add your own Google Analytics account or completely disable Google Analytics on your project.
Traffic Analytics lets you see which documents your users are reading. This allows you to understand how your documentation is being used, so you can focus on expanding and updating parts people are reading most.
To see a list of the top pages from the last month, go to the Admin tab of your project, and then click on Traffic Analytics.

Traffic analytics demo
You can also access analytics data from search results.
Note
The amount of analytics data stored for download depends on which site you’re using:
On the Community site, the last 90 days are stored.
On the Commercial one, storage ranges from 30 days to unlimited
(check out the pricing page).
Enabling Google Analytics on your project
Read the Docs has native support for Google Analytics. You can enable it by:
Go to Admin > Settings in your project.
Fill in the Analytics code field with your Google Tracking ID (for example UA-123456674-1).

Options to manage Google Analytics
Once your documentation rebuilds, it will include your Analytics tracking code and start sending data. Google Analytics usually takes about 60 minutes, and sometimes up to a day, before it starts reporting data.
Note
Read the Docs takes some extra precautions with analytics to protect user privacy. As a result, users with Do Not Track enabled will not be counted for the purpose of analytics.
For more details, see the Do Not Track section of our privacy policy.
Disabling Google Analytics on your project
Google Analytics can be completely disabled on your own projects. To disable Google Analytics:
Go to Admin > Settings in your project.
Check the box Disable Analytics.
Your documentation will need to be rebuilt for this change to take effect.
How to use search analytics
In this guide, you can learn to use Read the Docs’ built-in search analytics for your documentation project.
To see a list of the top queries and an overview from the last month, go to the Admin tab of your project, and then click on Search Analytics.

How the search analytics page looks.
In Top queries in the past 30 days, you see all the latest searches ordered by their popularity. The list itself is often longer than what meets the eye; scroll down the list to see more results.
Understanding your analytics
In Top queries in the past 30 days, you can see the most popular terms that users have searched for. Next to the search query, the number of actual results for that query is shown. The number of times the query has been used in a search is displayed as the searches number.
If you see a search term that doesn’t have any results, you could apply that term in documentation articles or create new ones. This is a great way to understand missing gaps in your documentation.
If a search term is often used but the documentation article exists, it can also indicate that it’s hard to navigate to the article.
Repeat the search yourself and inspect the results to see if they are relevant. You can add keywords to various pages that you want to show up for searches on that page.
In Daily search totals, you can see trends that might match special events in your project’s publicity. If you wish to analyze these numbers in detail, click Download all data to get a CSV formatted file with all available search analytics.
How to enable canonical URLs
In this guide, we introduce relevant settings for enabling canonical URLs in popular documentation frameworks.
If you need to customize the domain from which your documentation project is served, please refer to How to manage custom domains.
Sphinx
If you are using Sphinx, Read the Docs will automatically add a default value of the html_baseurl setting matching your canonical domain.
If you are using a custom html_baseurl in your conf.py,
you have to ensure that the value is correct.
This can be complex if you support pull request builds (which are published on a separate domain),
special branches,
or if you are using subprojects or translations.
We recommend not including html_baseurl in your conf.py
and letting Read the Docs define it.
MkDocs
For MkDocs we do not define your canonical domain automatically, but you can use the site_url setting to set a similar value.
In your mkdocs.yml
, define the following:
# Canonical URL, adjust as need with respect to your slug, language,
# default branch and if you use a custom domain.
site_url: https://<slug>.readthedocs.io/en/stable/
Note that this will define the same canonical URL for all your branches and versions. According to MkDocs, defining site_url will only define the canonical URL of a website and does not affect the base URL of generated links, CSS, or JavaScript files.
Note
Two known issues currently make it impossible to use environment variables in the MkDocs configuration. Once these issues are solved, this will be easier.
Warning
If you change your default version or canonical domain, you’ll need to re-build all your versions in order to update their canonical URL to the new one.
How to enable offline formats
This guide provides step-by-step instructions for enabling offline formats of your documentation.
They are automatically built by Read the Docs during our default build process, as long as you have enabled them in your configuration.
Enabling offline formats
Offline formats are enabled by the formats key in our config file. Here is a simple example:
# Build PDF & ePub
formats:
- epub
- pdf
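For context, a minimal but complete .readthedocs.yaml that also enables offline formats might look like the following sketch; the build OS, Python version, and configuration path are illustrative and should match your project:
# .readthedocs.yaml
version: 2

build:
  os: ubuntu-22.04
  tools:
    python: "3.12"

sphinx:
  configuration: docs/conf.py

# Build PDF & ePub in addition to HTML
formats:
  - epub
  - pdf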
Verifying offline formats
You can verify that offline formats are building in your Project dashboard > Downloads:

Deleting offline formats
The entries in the Downloads section of your project dashboard reflect the formats specified in your config file for each active version.
This means that if you wish to remove downloadable content for a given version, you can do so by removing the matching formats key from your config file.
Continue learning
See also
Other pages in our documentation are relevant to this feature, and might be a useful next step.
Offline formats (PDF, ePub, HTML) - Overview of this feature.
formats - Configuration file options for offline formats.
How to manage translations for Sphinx projects
This guide walks through the process needed to manage translations of your documentation. Once this work is done, you can set up your project on Read the Docs to build each language of your documentation by reading Localization and Internationalization.
Overview
There are many different ways to manage documentation in multiple languages by using different tools or services. All of them have their pros and cons depending on the context of your project or organization.
In this guide we will focus our efforts around two different methods: manual and using Transifex.
In both methods, we need to follow these steps to translate our documentation:
Create translatable files (.pot and .po extensions) from the source language.
Translate the text in those files from the source language to the target language.
Build the documentation in the target language using the translated texts.
Besides these steps, once we have published our first translated version of our documentation, we will want to keep it updated from the source language. At that time, the workflow would be:
Update our translatable files from source language
Translate only new and modified texts in source language into target language
Build the documentation using the most up to date translations
Create translatable files
To generate these .pot files, run this command from your docs/ directory:
sphinx-build -b gettext . _build/gettext
Tip
We recommend configuring Sphinx with gettext_uuid set to True and gettext_compact set to False when generating .pot files.
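A minimal conf.py sketch applying the settings from this tip:
# conf.py
# Add a unique ID to each message so translations can be tracked across changes
gettext_uuid = True
# Generate one .pot file per source document instead of one per directory
gettext_compact = False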
The sphinx-build command above will leave the generated files under _build/gettext.
Translate text from source language
Manually
We recommend using the sphinx-intl tool for this workflow.
First, you need to install it:
pip install sphinx-intl
As a second step, we want to create a directory with each translated file per target language (in this example we are using Spanish/Argentina and Portuguese/Brazil). This can be achieved with the following command:
sphinx-intl update -p _build/gettext -l es_AR -l pt_BR
This command will create a directory structure similar to the following
(with one .po
file per .rst
file in your documentation):
locale
├── es_AR
│ └── LC_MESSAGES
│ └── index.po
└── pt_BR
└── LC_MESSAGES
└── index.po
Now you can open those .po files with a text editor and translate them, taking care not to break the reST notation.
Example:
# b8f891b8443f4a45994c9c0a6bec14c3
#: ../../index.rst:4
msgid ""
"Read the Docs hosts documentation for the open source community."
"It supports :ref:`Sphinx <sphinx>` docs written with reStructuredText."
msgstr ""
"FILL HERE BY TARGET LANGUAGE FILL HERE BY TARGET LANGUAGE FILL HERE "
"BY TARGET LANGUAGE :ref:`Sphinx <sphinx>` FILL HERE."
Using Transifex
Transifex is a platform that simplifies the manipulation of .po
files and offers many useful features to make the translation process as smooth as possible.
These features include a web-based UI, translation memory, collaborative translation, and more.
You need to create an account on their service and a new project before starting.
After that, you need to install the Transifex CLI tool, which will help you upload source files, update them, and download translated files. To do this, run this command:
curl -o- https://raw.githubusercontent.com/transifex/cli/master/install.sh | bash
After installing it, you need to configure your account. For this, you need to create an API Token for your user to access this service through the command line. This can be done under your User’s Settings.
With the token, you have two options: export it as the TX_TOKEN environment variable or store it in ~/.transifexrc.
Exporting the token with an export command activates it for your current command line session:
# ``1/xxxx`` is the API token you generated
export TX_TOKEN=1/xxxx
In order to store the token permanently, you can save it in a ~/.transifexrc
file. It should look like this:
[https://www.transifex.com]
rest_hostname = https://rest.api.transifex.com
token = 1/xxxx
Now, it is time to set the project’s Transifex configuration and to map every .pot
file you have created in the previous step to a resource under Transifex.
To achieve this, you need to run this command:
sphinx-intl create-txconfig
sphinx-intl update-txconfig-resources \
--pot-dir _build/gettext \
--locale-dir locale \
--transifex-organization-name $TRANSIFEX_ORGANIZATION \
--transifex-project-name $TRANSIFEX_PROJECT
This command will generate a file at .tx/config
with all the information needed by the tx
tool to keep your translation synchronized.
Finally, you need to upload these files to Transifex platform so translators can start their work. To do this, you can run this command:
tx push --source
Now, you can go to your Transifex project and check that there is one resource per .rst
file of your documentation.
After the source files are translated using Transifex, you can download all the translations for all the languages by running:
tx pull --all
This command will leave the .po
files needed for building the documentation in the target language under locale/<lang>/LC_MESSAGES
.
Warning
It’s important to always use the same method to translate the documentation and not to mix them. Otherwise, it’s very easy to end up with inconsistent translations or to lose already translated text.
Build the documentation in target language
Finally, to build our documentation in Spanish (Argentina), we need to tell the Sphinx builder the target language with the following command:
sphinx-build -b html -D language=es_AR . _build/html/es_AR
Note
There is no need to create a new conf.py
to redefine the language
for the Spanish version of this documentation,
but you need to set locale_dirs to ["locale"]
for Sphinx to find the translated content.
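A minimal conf.py sketch for this note:
# conf.py
# Tell Sphinx where to look for the translated .po/.mo files
locale_dirs = ["locale"]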
After running the sphinx-build command above, the Spanish (Argentina) version of your documentation will be under _build/html/es_AR.
Summary
Update sources to be translated
Once you have made changes to your documentation, you may want to make these additions/modifications available for translators so they can update them:
Create the .pot files:
sphinx-build -b gettext . _build/gettext
Push the new files to Transifex:
tx push --source
Build documentation from up to date translation
When translators have finished their job, you may want to update the documentation by pulling the changes from Transifex:
Pull up to date translations from Transifex:
tx pull --all
Commit and push these changes to your repo:
git add locale/
git commit -m "Update translations"
git push
The last git push
will trigger a build per translation defined as part of your project under Read the Docs and make it immediately available.
How to support Unicode in Sphinx PDFs
Sphinx offers different LaTeX engines that have better support for Unicode characters, which is relevant, for instance, for Japanese or Chinese.
To build your documentation in PDF format, you need to configure Sphinx properly in your project’s conf.py
.
Read the Docs will execute the proper commands depending on these settings.
There are several settings that can be defined (all the ones starting with latex_)
to modify Sphinx and Read the Docs behavior so that your documentation builds properly.
If your docs are not written in Chinese or Japanese
and your build fails with a Unicode error,
try xelatex as the latex_engine in your conf.py:
latex_engine = "xelatex"
When Read the Docs detects that your documentation is in Chinese or Japanese, it automatically adds some defaults for you.
For Chinese projects, it appends these settings to your conf.py:
latex_engine = "xelatex"
latex_use_xindy = False
latex_elements = {
"preamble": "\\usepackage[UTF8]{ctex}\n",
}
And for Japanese projects:
latex_engine = "platex"
latex_use_xindy = False
Tip
You can always override these settings if you define them yourself in your conf.py file.
Note
xindy
is currently not supported by Read the Docs,
but we plan to support it in the near future.
How to use cross-references with Sphinx
When writing documentation you often need to link to other pages of your documentation, other sections of the current page, or sections from other pages.
An easy way is just to use the raw URL that Sphinx generates for each page/section. This works, but it has some disadvantages:
Links can change, so they are hard to maintain.
Links can be verbose and hard to read, so it is unclear what page/section they are linking to.
There is no easy way to link to specific sections like paragraphs, figures, or code blocks.
URL links only work for the HTML version of your documentation.
Instead, Sphinx offers a powerful way of linking to the different elements of the document, called cross-references. Some advantages of using them:
Use a human-readable name of your choice, instead of a URL.
Portable between formats: HTML, PDF, ePub.
Sphinx will warn you of invalid references.
You can cross reference more than just pages and section headers.
This page describes some best-practices for cross-referencing with Sphinx with two markup options: reStructuredText and MyST (Markdown).
If you are not familiar with reStructuredText, check reStructuredText Primer for a quick introduction.
If you want to learn more about the MyST Markdown dialect, check out Syntax tokens.
Getting started
Explicit targets
Cross referencing in Sphinx uses two components, references and targets.
references are pointers in your documentation to other parts of your documentation.
targets are where the references can point to.
You can manually create a target in any location of your documentation, allowing you to reference it from other pages. These are called explicit targets.
For example, one way of creating an explicit target for a section is:
.. _My target:

Explicit targets
~~~~~~~~~~~~~~~~

Reference `My target`_.
(My_target)=
## Explicit targets
Reference [](My_target).
Then the reference will be rendered as My target.
You can also add explicit targets before paragraphs (or any other part of a page).
Another example, add a target to a paragraph:
.. _target to paragraph:
An easy way is just to use the final link of the page/section.
This works, but it has :ref:`some disadvantages <target to paragraph>`:
(target_to_paragraph)=
An easy way is just to use the final link of the page/section.
This works, but it has [some disadvantages](target_to_paragraph):
Then the reference will be rendered as: some disadvantages.
You can also create in-line targets within an element on your page, allowing you to, for example, reference text within a paragraph.
For example, an in-line target inside a paragraph:
You can also create _`in-line targets` within an element on your page,
allowing you to, for example, reference text *within* a paragraph.
Then you can reference it using `in-line targets`_,
which will be rendered as: in-line targets.
Implicit targets
You may also reference some objects by name without explicitly giving them one by using implicit targets.
When you create a section, a footnote, or a citation, Sphinx will create a target with the title as the name:
For example, to reference the previous section
you can use `Explicit targets`_.
For example, to reference the previous section
you can use [](#explicit-targets).
Note
This requires setting myst_heading_anchors = 2
in your conf.py
,
see Auto-generated header anchors.
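For reference, the corresponding conf.py line looks like this:
# conf.py
# Auto-generate anchor targets for headings up to level 2
myst_heading_anchors = 2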
The reference will be rendered as: Explicit targets.
Cross-referencing using roles
All targets seen so far can be referenced only from the same page. Sphinx provides some roles that allow you to reference any explicit target from any page.
Note
Since Sphinx will make all explicit targets available globally, all targets must be unique.
You can see the complete list of cross-referencing roles at Cross-referencing syntax. Next, you will explore the most common ones.
The ref role
The ref role can be used to reference any explicit target. For example:
- :ref:`my target`.
- :ref:`Target to paragraph <target to paragraph>`.
- :ref:`Target inside a paragraph <in-line targets>`.
- {ref}`my target`.
- {ref}`Target to paragraph <target_to_paragraph>`.
That will be rendered as:
The ref role also allows us to reference code blocks:
.. _target to code:
.. code-block:: python
# Add the extension
extensions = [
'sphinx.ext.autosectionlabel',
]
# Make sure the target is unique
autosectionlabel_prefix_document = True
We can reference it using :ref:`code <target to code>`,
which will be rendered as: code.
The doc role
The doc
role allows us to link to a page instead of just a section.
The target name can be relative to the page where the role exists, or relative
to your documentation’s root folder (in both cases, you should omit the extension).
For example, to link to a page in the same directory as this one you can use:
- :doc:`intersphinx`
- :doc:`/guides/intersphinx`
- :doc:`Custom title </guides/intersphinx>`
- {doc}`intersphinx`
- {doc}`/guides/intersphinx`
- {doc}`Custom title </guides/intersphinx>`
That will be rendered as:
How to link to other documentation projects with Intersphinx
How to link to other documentation projects with Intersphinx
Tip
Using paths relative to your documentation root is recommended, so you avoid changing the target name if the page is moved.
The numref role
The numref
role is used to reference numbered elements of your documentation.
For example, tables and images.
To activate numbered references, add this to your conf.py
file:
# Enable numref
numfig = True
Next, ensure that an object you would like to reference has an explicit target.
For example, you can create a target for the next image:

Link me!
.. _target to image:
.. figure:: /img/logo.png
:alt: Logo
:align: center
:width: 240px
Link me!
(target_to_image)=
```{figure} /img/logo.png
:alt: Logo
:align: center
:width: 240px
```
Finally, reference it using :numref:`target to image`,
which will be rendered as Fig. N.
Sphinx will enumerate the image automatically.
Automatically label sections
Manually adding an explicit target to each section and making sure it is unique is a big task! Fortunately, Sphinx includes an extension to help us with that problem, autosectionlabel.
To activate the autosectionlabel
extension, add this to your conf.py
file:
# Add the extension
extensions = [
"sphinx.ext.autosectionlabel",
]
# Make sure the target is unique
autosectionlabel_prefix_document = True
Sphinx will create explicit targets for all your sections;
the name of the target has the form {path/to/page}:{title-of-section}.
For example, you can reference the previous section using:
- :ref:`guides/cross-referencing-with-sphinx:explicit targets`.
- :ref:`Custom title <guides/cross-referencing-with-sphinx:explicit targets>`.
- {ref}`guides/cross-referencing-with-sphinx:explicit targets`.
- {ref}`Custom title <guides/cross-referencing-with-sphinx:explicit targets>`.
That will be rendered as:
Invalid targets
If you reference an invalid or undefined target, Sphinx will warn you.
You can use the -W option when building your docs
to fail the build if there are any invalid references.
On Read the Docs you can use the sphinx.fail_on_warning option.
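A minimal .readthedocs.yaml sketch enabling that option (the configuration path is illustrative):
# .readthedocs.yaml
version: 2

sphinx:
  configuration: docs/conf.py
  # Turn Sphinx warnings, including invalid cross-references, into build failures
  fail_on_warning: true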
Finding the reference name
When you build your documentation, Sphinx will generate an inventory of all
explicit and implicit links called objects.inv.
You can list all of these targets to explore what is available for you to reference.
List all targets for built documentation with:
python -m sphinx.ext.intersphinx <link>
Where <link> is either a URL or a local path that points to your inventory file
(usually in _build/html/objects.inv).
For example, to see all targets from the Read the Docs documentation:
python -m sphinx.ext.intersphinx https://docs.readthedocs.io/en/stable/objects.inv
Cross-referencing targets in other documentation sites
You can reference docs outside your project too! See How to link to other documentation projects with Intersphinx.
How to link to other documentation projects with Intersphinx
This section shows you how to maintain references to named sections of other external Sphinx projects.
You may be familiar with using the :ref: role to link to any location of your docs. It helps you to keep all links within your docs up to date and warns you if a reference target moves or changes so you can ensure that your docs don’t have broken cross-references.
Sometimes you may need to link to a specific section of another project’s documentation.
While you could just hyperlink directly, there is a better way.
Intersphinx allows you to use all cross-reference roles from Sphinx with objects in other projects.
That is, you could use the :ref:
role to link to sections of other documentation projects.
Sphinx will ensure that your cross-references to the other project exist and will raise a warning if they are deleted or changed so you can keep your docs up to date.
If you are publishing several Sphinx projects together using Read the Docs’ subprojects (see Subprojects), you should use Intersphinx to reference your subprojects from other projects.
Note
You can also use Sphinx’s linkcheck
builder to check for broken links.
By default it will also check the validity of #anchors
in links.
sphinx-build -b linkcheck . _build/linkcheck
See all the options for the linkcheck builder.
Using Intersphinx
To use Intersphinx you need to add it to the list of extensions in your conf.py
file.
# conf.py file
extensions = [
"sphinx.ext.intersphinx",
]
And use the intersphinx_mapping
configuration to indicate the name and link of the projects you want to use.
# conf.py file
intersphinx_mapping = {
"sphinx": ("https://www.sphinx-doc.org/en/master/", None),
}
# We recommend adding the following config value.
# Sphinx defaults to automatically resolving *unresolved* labels using all your Intersphinx mappings.
# This behavior has unintended side-effects, namely that a documentation's local references can
# suddenly resolve to an external location.
# See also:
# https://www.sphinx-doc.org/en/master/usage/extensions/intersphinx.html#confval-intersphinx_disabled_reftypes
intersphinx_disabled_reftypes = ["*"]
Note
If you are using Read the Docs’ subprojects, you also need to enable the Intersphinx extension on each of the subprojects.
For each subproject, you need to add the main project and all the other subprojects to intersphinx_mapping
.
Now you can use the sphinx
name with a cross-reference role:
- :ref:`sphinx:ref-role`
- :ref:`:ref: role <sphinx:ref-role>`
- :doc:`sphinx:usage/extensions/intersphinx`
- :doc:`Intersphinx <sphinx:usage/extensions/intersphinx>`
- {ref}`sphinx:ref-role`
- {ref}`:ref: role <sphinx:ref-role>`
- {doc}`sphinx:usage/extensions/intersphinx`
- {doc}`Intersphinx <sphinx:usage/extensions/intersphinx>`
Result:
Note
You can get the targets used in Intersphinx by inspecting the source file of the project or using this utility provided by Intersphinx:
python -m sphinx.ext.intersphinx https://www.sphinx-doc.org/en/master/objects.inv
Intersphinx in Read the Docs
You can use Intersphinx to link to subprojects, translations, another version or any other project hosted in Read the Docs. For example:
# conf.py file
intersphinx_mapping = {
# Links to "v2" version of the "docs" project.
"docs-v2": ("https://docs.readthedocs.io/en/v2", None),
# Links to the French translation of the "docs" project.
"docs-fr": ("https://docs.readthedocs.io/fr/latest", None),
# Links to the "apis" subproject of the "docs" project.
"sub-apis": ("https://docs.readthedocs.io/projects/apis/en/latest", None),
}
Intersphinx with private projects
If you are using Business hosting, Intersphinx will not be able to fetch the inventory file from private docs.
Intersphinx supports URLs with Basic Authorization, which Read the Docs provides using a token. You need to generate a token for each project you want to use with Intersphinx.
Go to the project you want to use with Intersphinx.
Click Admin > Sharing.
Select HTTP Header Token.
Set an expiration date long enough to use the token when building your project.
Click on Share!
Now you can add the link to the private project with the token, like this:
# conf.py file
intersphinx_mapping = {
# Links to a private project named "docs"
"docs": (
"https://<token-for-docs>:@readthedocs-docs.readthedocs-hosted.com/en/latest",
None,
),
# Links to the private French translation of the "docs" project
"docs": (
"https://<token-for-fr-translation>:@readthedocs-docs.readthedocs-hosted.com/fr/latest",
None,
),
# Links to the private "apis" subproject of the "docs" project
"docs": (
"https://<token-for-apis>:@readthedocs-docs.readthedocs-hosted.com/projects/apis/en/latest",
None,
),
}
Note
Sphinx will strip the token from the URLs when generating the links.
You can use your tokens with environment variables,
so you don’t have to hard code them in your conf.py
file.
See Environment variable overview to use environment variables inside Read the Docs.
For example,
if you create an environment variable named RTD_TOKEN_DOCS
with the token from the “docs” project,
you can use it like this:
# conf.py file
import os
RTD_TOKEN_DOCS = os.environ.get("RTD_TOKEN_DOCS")
intersphinx_mapping = {
# Links to a private project named "docs"
"docs": (
f"https://{RTD_TOKEN_DOCS}:@readthedocs-docs.readthedocs-hosted.com/en/latest",
None,
),
}
Note
Another way of using Intersphinx with private projects is to download the inventory file and keep it in sync when the project changes.
The inventory file is by default located at objects.inv,
for example https://readthedocs-docs.readthedocs-hosted.com/en/latest/objects.inv.
# conf.py file
intersphinx_mapping = {
# Links to a private project named "docs" using a local inventory file.
"docs": (
"https://readthedocs-docs.readthedocs-hosted.com/en/latest",
"path/to/local/objects.inv",
),
}
How to use Jupyter notebooks in Sphinx
Jupyter notebooks are a popular tool to describe computational narratives that mix code, prose, images, interactive components, and more. Embedding them in your Sphinx project allows using these rich documents as documentation, which can provide a great experience for tutorials, examples, and other types of technical content. There are a few extensions that allow integrating Jupyter and Sphinx, and this document will explain how to achieve some of the most commonly requested features.
Including classic .ipynb
notebooks in Sphinx documentation
There are two main extensions that add support for Jupyter notebooks as source files in Sphinx:
nbsphinx and MyST-NB. They have similar intent and basic functionality:
both can read notebooks in .ipynb
and additional formats supported by jupytext,
and are configured in a similar way
(see Existing relevant extensions for more background on their differences).
First of all, create a Jupyter notebook using the editor of your liking (for example, JupyterLab).
For example, source/notebooks/Example 1.ipynb
:

Example Jupyter notebook created on JupyterLab
Next, you will need to enable one of the extensions, as follows:
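As a sketch, enabling either extension amounts to adding it to the extensions list in your conf.py (pick the one you installed):
# conf.py
extensions = [
    "nbsphinx",
    # or, if you chose MyST-NB instead:
    # "myst_nb",
]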
Finally, you can include the notebook in any toctree. For example, add this to your root document:
.. toctree::
:maxdepth: 2
:caption: Contents:
notebooks/Example 1
```{toctree}
---
maxdepth: 2
caption: Contents:
---
notebooks/Example 1
```
The notebook will render as any other HTML page in your documentation
after doing make html
.

Example Jupyter notebook rendered on HTML by nbsphinx
To further customize the rendering process among other things, refer to the nbsphinx or MyST-NB documentation.
Rendering interactive widgets
Widgets are eventful python objects that have a representation in the browser and that you can use to build interactive GUIs for your notebooks. Basic widgets using ipywidgets include controls like sliders, textboxes, and buttons, and more complex widgets include interactive maps, like the ones provided by ipyleaflet.
You can embed these interactive widgets in HTML Sphinx documentation. For this to work, it’s necessary to save the widget state before generating the HTML documentation, otherwise the widget will appear empty. Each editor has a different way of doing it:
The classical Jupyter Notebook interface provides a “Save Notebook Widget State” action in the “Widgets” menu, as explained in the ipywidgets documentation. You need to click it before exporting your notebook to HTML.
JupyterLab provides a “Save Widget State Automatically” option in the “Settings” menu. You need to leave it checked so that widget state is automatically saved.
In Visual Studio Code it’s not possible to save the widget state at the time of writing (June 2021).

JupyterLab option to save the interactive widget state automatically
For example, if you create a notebook with a simple IntSlider widget from ipywidgets and save the widget state, the slider will render correctly in Sphinx.
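A minimal notebook cell illustrating this, assuming ipywidgets is installed (the values are arbitrary):
# Notebook code cell
import ipywidgets as widgets

# Display the slider; remember to save the widget state before building the docs
slider = widgets.IntSlider(value=5, min=0, max=10, description="Value:")
slider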

Interactive widget rendered in HTML by Sphinx
To see more elaborate examples:
ipyleaflet provides several widgets for interactive maps, and renders live versions of them in their documentation.
PyVista is used for scientific 3D visualization with several interactive backends and examples in their documentation as well.
Warning
Although widgets themselves can be embedded in HTML,
events require a backend (kernel) to execute.
Therefore, @interact, .observe, and related functionalities relying on them
will not work as expected.
Note
If your widgets need some additional JavaScript libraries,
you can add them using add_js_file()
.
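A minimal conf.py sketch of that approach; the URL is hypothetical and should point at the JavaScript library your widgets actually need:
# conf.py
def setup(app):
    # Load an extra JavaScript library on every HTML page
    app.add_js_file("https://cdn.example.org/widget-library.js")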
Using notebooks in other formats
For example, this is what a simple notebook looks like in MyST Markdown format:
---
jupytext:
text_representation:
extension: .md
format_name: myst
format_version: 0.13
jupytext_version: 1.10.3
kernelspec:
display_name: Python 3
language: python
name: python3
---
# Plain-text notebook formats
This is an example of a Jupyter notebook stored in MyST Markdown format.
```{code-cell} ipython3
import sys
print(sys.version)
```
```{code-cell} ipython3
from IPython.display import Image
```
```{code-cell} ipython3
Image("http://sipi.usc.edu/database/preview/misc/4.2.03.png")
```
To render this notebook in Sphinx
you will need to add this to your conf.py
:
# For nbsphinx:
nbsphinx_custom_formats = {
    ".md": ["jupytext.reads", {"fmt": "mystnb"}],
}
# For MyST-NB:
nb_custom_formats = {
    ".md": ["jupytext.reads", {"fmt": "mystnb"}],
}
Notice that the Markdown format does not store the outputs of the computation. Sphinx will automatically execute notebooks without outputs, so they appear complete in your HTML documentation.
Creating galleries of examples using notebooks
nbsphinx has support for creating thumbnail galleries from a list of Jupyter notebooks. This functionality relies on Sphinx-Gallery and extends it to work with Jupyter notebooks rather than Python scripts.
To use it, you will need to install both nbsphinx and Sphinx-Gallery,
and modify your conf.py
as follows:
extensions = [
"nbsphinx",
"sphinx_gallery.load_style",
]
After doing that, there are two ways to create the gallery:
From a reStructuredText source file, using the .. nbgallery:: directive, as showcased in the documentation.
From a Jupyter notebook, adding a "nbsphinx-gallery" tag to the metadata of a cell. Each editor has a different way of modifying the cell metadata (see figure below).

Panel to modify cell metadata in JupyterLab
For example, this reST markup would create a thumbnail gallery with generic images as thumbnails, thanks to the Sphinx-Gallery default style:
Thumbnails gallery
==================
.. nbgallery::
notebooks/Example 1
notebooks/Example 2
# Thumbnails gallery
```{nbgallery}
notebooks/Example 1
notebooks/Example 2
```

Simple thumbnail gallery created using nbsphinx
To see some examples of notebook galleries in the wild:
poliastro offers tools for interactive Astrodynamics in Python, and features several examples and how-to guides using notebooks and displays them in an appealing thumbnail gallery. In addition, poliastro uses unpaired MyST Notebooks to reduce repository size and improve integration with git.
Background
Existing relevant extensions
In the first part of this document we have seen that nbsphinx and MyST-NB are similar. However, there are some differences between them:
nbsphinx uses pandoc to convert the Markdown from Jupyter notebooks to reStructuredText and then to docutils AST, whereas MyST-NB uses MyST-Parser to directly convert the Markdown text to docutils AST. Therefore, nbsphinx assumes pandoc-flavored Markdown, whereas MyST-NB uses MyST-flavored Markdown. Both Markdown flavors are mostly equal, but they have some differences.
nbsphinx executes each notebook during the parsing phase, whereas MyST-NB can execute all notebooks up front and cache them with jupyter-cache. This can result in shorter build times when notebooks are modified if using MyST-NB.
nbsphinx provides functionality to create thumbnail galleries, whereas MyST-NB does not have such functionality at the moment (see Creating galleries of examples using notebooks for more information about galleries).
MyST-NB allows embedding Python objects coming from the notebook in the documentation (read their “glue” documentation for more information) and provides more sophisticated error reporting than the one nbsphinx has.
The visual appearance of code cells and their outputs is slightly different: nbsphinx renders the cell numbers by default, whereas MyST-NB doesn’t.
Deciding which one to use depends on your use case. As general recommendations:
If you want to use other notebook formats or generate a thumbnail gallery from your notebooks, nbsphinx is the right choice.
If you want to leverage a more optimized execution workflow and a more streamlined parsing mechanism, as well as some of the unique MyST-NB functionalities, you should use MyST-NB.
Alternative notebook formats
Jupyter notebooks in .ipynb
format
(as described in the nbformat
documentation)
are by far the most widely used for historical reasons.
However, to compensate for some of the disadvantages of the .ipynb
format
(like cumbersome integration with version control systems),
jupytext offers other formats
based on plain text rather than JSON.
As a result, there are three modes of operation:
Using classic .ipynb notebooks. It’s the most straightforward option, since all the tooling is prepared to work with them, and it does not require additional pieces of software. It is therefore simpler to manage, since there are fewer moving parts. However, it requires some care when working with version control systems (like git), by doing one of these things:
Clear outputs before commit. Minimizes conflicts, but might defeat the purpose of notebooks themselves, since the computation results are not stored.
Use tools like nbdime (open source) or ReviewNB (proprietary) to improve the review process.
Use a different collaboration workflow that doesn’t involve notebooks.
Replacing .ipynb notebooks with a text-based format. These formats behave better under version control and they can also be edited with normal text editors that do not support cell-based JSON notebooks. However, text-based formats do not store the outputs of the cells, and this might not be what you want.
Pairing .ipynb notebooks with a text-based format, and putting the text-based file in version control, as suggested in the jupytext documentation. This solution has the best of both worlds. In some rare cases you might experience synchronization issues between both files.
These approaches are not mutually exclusive, nor do you have to use a single format for all your notebooks. For the examples in this document, we have used the MyST Markdown format.
If you are using alternative formats for Jupyter notebooks, you can include them in your Sphinx documentation using either nbsphinx or MyST-NB (see Existing relevant extensions for more information about the differences between them).
How to migrate from reStructuredText to MyST Markdown
In this guide, you will find how you can start writing Markdown in your existing reStructuredText project, or migrate it completely.
Sphinx is usually associated with reStructuredText, the markup language designed for the CPython project in the early ’00s. However, for quite some time Sphinx has been compatible with Markdown as well, thanks to a number of extensions.
The most powerful of such extensions is MyST-Parser, which implements a CommonMark-compliant, extensible Markdown dialect with support for the Sphinx roles and directives that make it so useful.
If, instead of migrating, you are starting a new project from scratch,
have a look at 🚀 Get Started.
If you are starting a project for Jupyter, you can start with Jupyter Book, which uses MyST-Parser; see the official Jupyter Book tutorial: Create your first book.
How to write your content both in reStructuredText and MyST
It is useful to ask whether a migration is necessary in the first place. Doing bulk migrations of large projects with lots of work in progress will create conflicts for ongoing changes. On the other hand, your writers might prefer to have some files in Markdown and some others in reStructuredText, for whatever reason. Luckily, Sphinx supports reading both types of markup at the same time without problems.
To start using MyST in your existing Sphinx project,
first install the myst-parser
Python package
and then enable it on your configuration:
extensions = [
# Your existing extensions
...,
"myst_parser",
]
Your reStructuredText documents will keep rendering,
and you will be able to add MyST documents with the .md
extension
that will be processed by MyST-Parser.
As an example, this guide is written in MyST while the rest of the Read the Docs documentation is written in reStructuredText.
Note
By default, MyST-Parser registers the .md
suffix for MyST source files.
If you want to use a different suffix, you can do so by changing your
source_suffix
configuration value in conf.py
.
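For illustration, a conf.py sketch mapping suffixes to parsers; the .txt entry is only an example of an alternative suffix handled by MyST-Parser:
# conf.py
source_suffix = {
    ".rst": "restructuredtext",
    ".md": "markdown",
    ".txt": "markdown",  # example alternative suffix parsed as MyST Markdown
}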
How to convert existing reStructuredText documentation to MyST
To convert existing reST documents to MyST, you can use
the rst2myst
CLI script shipped by RST-to-MyST.
The script supports converting the documents one by one,
or scanning a series of directories to convert them in bulk.
After installing `rst-to-myst`, you can run the script as follows:
$ rst2myst convert docs/source/index.rst # Converts index.rst to index.md
$ rst2myst convert docs/**/*.rst # Convert every .rst file under the docs directory
This will create a .md
MyST file for every .rst
source file converted.
How to modify the behaviour of rst2myst
The rst2myst command accepts several flags to modify its behavior.
All of them have sensible defaults, so you don’t have to specify them
unless you want to.
These are a few options you might find useful:
-d, --dry-run
Only verify that the script would work correctly, without actually writing any files.
-R, --replace-files
Replace the .rst files by their .md equivalents, rather than writing a new .md file next to the old .rst one.
You can read the full list of options in the `rst2myst` documentation.
How to enable optional syntax
Some reStructuredText syntax will require you to enable certain MyST plugins.
For example, to write reST definition lists, you need to add a
myst_enable_extensions
variable to your Sphinx configuration, as follows:
myst_enable_extensions = [
"deflist",
]
You can learn more about other MyST-Parser plugins in their documentation.
How to write reStructuredText syntax within MyST
There is a small chance that rst2myst
does not properly understand a piece of reST syntax,
either because there is a bug in the tool
or because that syntax does not have a MyST equivalent yet.
For example, as explained in the documentation,
the sphinx.ext.autodoc
extension is incompatible with MyST.
Fortunately, MyST supports an eval-rst
directive
that will parse the content as reStructuredText, rather than MyST.
For example:
```{eval-rst}
.. note::
Complete MyST migration.
```
will produce the following result:
Note
Complete MyST migration.
As a result, this allows you to conduct a gradual migration, at the expense of having heterogeneous syntax in your source files. In any case, the HTML output will be the same.
How to add custom CSS or JavaScript to Sphinx documentation
Adding additional CSS or JavaScript files to your Sphinx documentation can let you customize the look and feel of your docs or add additional functionality. For example, with a small snippet of CSS, your documentation could use a custom font or have a different background color.
If your custom stylesheet is _static/css/custom.css
,
you can add that CSS file to the documentation using the
Sphinx option html_css_files:
## conf.py
# These folders are copied to the documentation's HTML output
html_static_path = ['_static']
# These paths are either relative to html_static_path
# or fully qualified paths (eg. https://...)
html_css_files = [
'css/custom.css',
]
A similar approach can be used to add JavaScript files:
html_js_files = [
'js/custom.js',
]
Note
The Sphinx HTML options html_css_files
and html_js_files
were added in Sphinx 1.8.
Unless you have a good reason to use an older version,
you are strongly encouraged to upgrade.
Sphinx is almost entirely backwards compatible.
Overriding or replacing a theme’s stylesheet
The above approach is preferred for adding additional stylesheets or JavaScript, but it is also possible to completely replace a Sphinx theme’s stylesheet with your own stylesheet.
If your replacement stylesheet exists at _static/css/yourtheme.css
,
you can replace your theme’s CSS file by setting html_style
in your conf.py
:
## conf.py
html_style = 'css/yourtheme.css'
If you only need to override a few styles on the theme, you can include the theme’s normal CSS using the CSS @import rule.
/** css/yourtheme.css **/
/* This line is theme specific - it includes the base theme CSS */
@import '../alabaster.css'; /* for Alabaster */
/*@import 'theme.css'; /* for the Read the Docs theme */
body {
/* ... */
}
See also
You can also add custom classes to your html elements. See Docutils Class and this related Sphinx footnote… for more information.
Adding “Edit Source” links on your Sphinx theme
Read the Docs injects some extra variables in the Sphinx html_context
that are used by our Sphinx theme to display “edit source” links at the top of all pages.
You can use these variables in your own Sphinx theme as well.
More information can be found on Sphinx documentation.
GitHub
If you want to integrate GitHub, these are the required variables to put into
your conf.py
:
html_context = {
"display_github": True, # Integrate GitHub
"github_user": "MyUserName", # Username
"github_repo": "MyDoc", # Repo name
"github_version": "master", # Version
"conf_py_path": "/source/", # Path in the checkout to the docs root
}
They can be used like this:
{% if display_github %}
<li><a href="https://github.com/{{ github_user }}/{{ github_repo }}
/blob/{{ github_version }}{{ conf_py_path }}{{ pagename }}.rst">
Show on GitHub</a></li>
{% endif %}
Bitbucket
If you want to integrate Bitbucket, these are the required variables to put into
your conf.py
:
html_context = {
"display_bitbucket": True, # Integrate Bitbucket
"bitbucket_user": "MyUserName", # Username
"bitbucket_repo": "MyDoc", # Repo name
"bitbucket_version": "master", # Version
"conf_py_path": "/source/", # Path in the checkout to the docs root
}
They can be used like this:
{% if display_bitbucket %}
<a href="https://bitbucket.org/{{ bitbucket_user }}/{{ bitbucket_repo }}
/src/{{ bitbucket_version }}{{ conf_py_path }}{{ pagename }}.rst"
class="icon icon-bitbucket"> Edit on Bitbucket</a>
{% endif %}
Gitlab
If you want to integrate Gitlab, these are the required variables to put into
your conf.py
:
html_context = {
"display_gitlab": True, # Integrate Gitlab
"gitlab_user": "MyUserName", # Username
"gitlab_repo": "MyDoc", # Repo name
"gitlab_version": "master", # Version
"conf_py_path": "/source/", # Path in the checkout to the docs root
}
They can be used like this:
{% if display_gitlab %}
<a href="https://{{ gitlab_host|default("gitlab.com") }}/
{{ gitlab_user }}/{{ gitlab_repo }}/blob/{{ gitlab_version }}
{{ conf_py_path }}{{ pagename }}{{ suffix }}" class="fa fa-gitlab">
Edit on GitLab</a>
{% endif %}
Additional variables
'pagename'
- Sphinx variable representing the name of the page you’re on.
How-to guides: security and access
- ⏩️ Single Sign-On (SSO) with GitHub, GitLab, or Bitbucket
When using an organization on Read the Docs for Business, you can configure SSO for your users to authenticate to Read the Docs.
- ⏩️ Single Sign-On (SSO) with Google Workspace
When using an organization on Read the Docs for Business, you can configure SSO for your users to authenticate to Read the Docs. This guide is written for Google Workspace.
- ⏩️ Managing Read the Docs teams
When using an organization on Read the Docs for Business, it’s possible to create different teams with custom access levels.
- ⏩️ Manually importing private repositories
You can grant access to private Git repositories using Read the Docs for Business using a custom process if required. Here is how you set it up.
- ⏩️ Using private Git submodules
If you are using private Git repositories and they also contain private Git submodules, you need to follow a few special steps.
- ⏩️ Installing private python packages
If you have private dependencies, you can install them from a private Git repository or a private repository manager.
How to setup Single Sign-On (SSO) with GitHub, GitLab, or Bitbucket
Note
This feature is only available on Read the Docs for Business.
This how-to guide will provide instructions on how to enable SSO with GitHub, GitLab, or Bitbucket. If you want more information on this feature, please read Single Sign-On (SSO)
Prerequisites
Organization permissions
To change your Organization’s settings, you need to be an owner of that organization.
You can validate your ownership of the Organization with these steps:
Navigate to the organization management page.
Look at the Owners section on the right menu.
If you’d like to modify this setting and are not an owner, you can ask an existing organization owner to take the actions listed.
User setup
Users in your organization must have their GitHub, Bitbucket, or GitLab account connected, otherwise they won’t have access to any project on Read the Docs after performing this change. You can read more about granting permissions on GitHub in their documentation.
Enabling SSO
You can enable this feature in your organization by:
Navigate to the authorization setting page.
Select GitHub, GitLab, or Bitbucket in the Provider dropdown.
Select Save.
Warning
Once you enable this option, your existing Read the Docs teams will not be used. While testing you can enable SSO and then disable it without any data loss.
Grant access to read private documentation
By granting read permissions to a user in your git repository, you are giving the user access to read the documentation of the associated project on Read the Docs. By default, private git repositories are built as private documentation websites. Having read permissions to the git repository translates to having view permissions to a private documentation website.
Grant access to administer a project
By granting admin permission to a user in the git repository, you are giving the user access to read the documentation and to be an administrator of the associated project on Read the Docs.
Grant access to import a project
When SSO with a Git provider is enabled, only owners of the Read the Docs organization can import projects.
To be able to import a project, a user must have:
admin permissions in the associated Git repository.
ownership rights to the Read the Docs organization.
Revoke access to a project
If a user should not have access to a project, you can revoke access to the git repository, and this will be automatically reflected in Read the Docs.
The same process is followed in case you need to remove admin access, but still want that user to have access to read the documentation. Instead of revoking access completely, downgrade their permissions to read only.
See also
To learn more about choosing a Single Sign-on approach, please read Single Sign-On (SSO).
How to setup Single Sign-On (SSO) with Google Workspace
Note
This feature is only available on Read the Docs for Business.
This how-to guide will provide instructions on how to enable SSO with Google Workspace. If you want more information on this feature, please read Single Sign-On (SSO)
Prerequisites
Organization permissions
To change your Organization’s settings, you need to be an owner of that organization.
You can validate your ownership of the Organization with these steps:
Navigate to the organization management page.
Look at the Owners section on the right menu.
If you’d like to modify this setting and are not an owner, you can ask an existing organization owner to take the actions listed.
Connect your Google account to Read the Docs
In order to enable the Google Workspace integration, you need to connect your Google account to Read the Docs.
The domain attached to your Google account will be used to match users that sign up with a Google account to your organization.
User setup
Using this setup, all users who have access to the configured Google Workspace will automatically join your organization when they sign up with their Google account. Existing users will not be automatically joined to the organization.
You can still add outside collaborators and manage their access. There are two ways to manage this access:
Enabling SSO
By default, users that sign up with a Google account do not have any permissions over any project. However, you can define which teams users matching your company’s domain email address will auto-join when they sign up.
Navigate to the authorization setting page.
Select Google in the Provider drop-down.
Press Save.
After enabling SSO with Google Workspace, all users with email addresses from your configured Google Workspace domain will be required to sign up using their Google account.
Warning
Existing users with email addresses from your configured Google Workspace domain will not be required to link their Google account, but they won’t be automatically joined to your organization.
Configure team for all users to join
You can mark one or more teams that users automatically join when they sign up with a matching email address. Configure this option by following these steps:
Navigate to the teams management page.
Click the <team name>.
Click Edit team
Enable Auto join users with an organization’s email address to this team.
Click Save
With this enabled,
all users that sign up with their employee@company.com
email will automatically join this team.
These teams can have either read-only or admin permissions over a set of projects.
Revoke user’s access to all the projects
By disabling the Google Workspace account with email employee@company.com
,
you revoke access to all the projects the linked Read the Docs user had access to,
and disable login on Read the Docs completely for that user.
Warning
If the user signed up to Read the Docs previously to enabling SSO with Google Workspace on your organization, they may still have access to their account and projects if they were manually added to a team.
To completely revoke access to a user, remove them from all the teams they are part of.
Warning
If the user was already signed in to Read the Docs when their access was revoked, they may still have access to documentation pages until their session expires. This is three days for the dashboard and documentation pages.
To completely revoke access to a user, remove them from all the teams they are part of.
See also
- How to manage Read the Docs teams
Additional user management options
- Single Sign-On (SSO)
Information about choosing a Single Sign-on approach
How to manage Read the Docs teams
Note
This feature is only available on Read the Docs for Business.
Read the Docs uses teams within an organization to group users and provide permissions to projects. This guide will cover how to do team management, including adding and removing people from teams. You can read more about organizations and teams in our Organizations documentation.
Adding a user to a team
Adding a user to a team gives them all the permissions available to that team, whether it’s read-only or admin.
Follow these steps:
Navigate to the teams management page.
Click on a <team name>.
Click Invite Member.
Input the user’s Read the Docs username or email address.
Click Add member.
Removing a user from a team
Removing a user from a team removes all permissions that team gave them.
Follow these steps:
Navigate to the teams management page.
Click on <team name>.
Click Remove next to the user.
Grant access to users to import a project
Make the user a member of any team with admin permissions; they will then be able to import a project under that team.
Automating this process
You can manage teams more easily using our Single Sign-On features.
See also
- Organizations
General information about the organizations feature.
How to import private repositories
Note
This feature is only available on Read the Docs for Business.
You can grant access to private Git repositories using Read the Docs for Business. Here is how you set it up.
- ✅️ Logged in with GitHub, Bitbucket, or GitLab?
If you signed up or logged in to Read the Docs with your GitHub, Bitbucket, or GitLab credentials, all you have to do is to use the normal project import. Your Read the Docs account is connected to your Git provider and will let you choose from private Git repositories and configure them for you.
You can still use the below guide if you need to recreate SSH keys for a private repository.
- ⬇️ Logging in with another provider or email?
For all other Git provider setups, you will need to configure the Git repository manually.
Follow the steps below.
Importing your project manually
Git repositories aren’t automatically listed for setups that are not connected to GitHub, Bitbucket, or GitLab. That is why this guide is an extension of the manual Git repository setup, with the following exception:
In the Repository URL field, you need to provide the SSH version of your repository’s URL. It starts with git@..., for example git@github.com:readthedocs/readthedocs.org.git.
After importing your project the build will fail, because Read the Docs doesn’t have access to clone your repository. To give access, you’ll need to add your project’s public SSH key to your VCS provider.
Copy your project’s public key
The next step is to locate the public SSH key that Read the Docs has automatically generated for your project:
Go to the Admin > SSH Keys tab of your project.
Click on the fingerprint of the SSH key (it looks like 6d:ca:6d:ca:6d:ca:6d:ca).
Copy the text from the Public key section.
Note
The private part of the SSH key is kept secret.
Add the public key to your project
Now that you have copied the public key generated by Read the Docs, you need to add it to your Git repository’s settings.
For GitHub, you can use deploy keys with read only access.
Go to your project on GitHub
Click on Settings
Click on Deploy Keys
Click on Add deploy key
Put a descriptive title and paste the public SSH key from your Read the Docs project
Click on Add key
For GitLab, you can use deploy keys with read only access.
Go to your project on GitLab
Click on Settings
Click on Repository
Expand the Deploy Keys section
Put a descriptive title and paste the public SSH key from your Read the Docs project
Click on Add key
For Bitbucket, you can use access keys with read only access.
Go to your project on Bitbucket
Click on Repository Settings
Click on Access keys
Click on Add key
Put a descriptive label and paste the public SSH key from your Read the Docs project
Click on Add SSH key
For Azure DevOps, you can use SSH key authentication.
Go to your Azure DevOps page
Click on User settings
Click on SSH public keys
Click on New key
Put a descriptive name and paste the public SSH key from your Read the Docs project
Click on Add
If you are not using any of the above providers, Read the Docs will still generate a pair of SSH keys. You’ll need to add the public SSH key from your Read the Docs project to your repository. Refer to your provider’s documentation for the steps required to do this.
Webhooks
Finally, since this is a manual project import:
Don’t forget to add the Read the Docs webhook!
To automatically trigger new builds on Read the Docs, you’ll need to manually add a webhook, see How to manually configure a Git repository integration.
How to use private Git submodules
Warning
This guide is for Business hosting.
If you are using private Git repositories and they also contain private Git submodules, you need to follow a few special steps.
Read the Docs uses SSH keys (with read only permissions) in order to clone private repositories. An SSH key is automatically generated and added to your main repository, but not to your submodules. In order to give Read the Docs access to clone your submodules you’ll need to add the public SSH key to each repository of your submodules.
Note
You can manage which submodules Read the Docs should clone using a configuration file. See submodules.
Make sure you are using SSH URLs for your submodules (for example git@github.com:readthedocs/readthedocs.org.git) in your .gitmodules file, not http URLs.
GitHub
Since GitHub doesn’t allow you to reuse a deploy key across different repositories, you’ll need to use machine users to give read access to several repositories using only one SSH key.
Remove the SSH deploy key that was added to the main repository on GitHub
Go to your project on GitHub
Click on Settings
Click on Deploy Keys
Delete the key added by Read the Docs Commercial (readthedocs.com)
Create a GitHub user and give it read only permissions to all the necessary repositories. You can do this by adding the account as:
Attach the public SSH key from your project on Read the Docs to the GitHub user you just created
Go to the user’s settings
Click on SSH and GPG keys
Click on New SSH key
Put a descriptive title and paste the public SSH key from your Read the Docs project
Click on Add SSH key
Azure DevOps
Azure DevOps does not have per-repository SSH keys, but keys can be added to a user instead. As long as this user has access to your main repository and all its submodules, Read the Docs can clone all the repositories with the same key.
Others
GitLab and Bitbucket allow you to reuse the same SSH key across different repositories. Since Read the Docs already added the public SSH key on your main repository, you only need to add it to each submodule repository.
How to install private python packages
Warning
This guide is for Business hosting.
Read the Docs uses pip to install your Python packages. If you have private dependencies, you can install them from a private Git repository or a private repository manager.
From a Git repository
Pip supports installing packages from a Git repository using the URI form:
git+https://gitprovider.com/user/project.git@{version}
Or if your repository is private:
git+https://{token}@gitprovider.com/user/project.git@{version}
Where version can be a tag, a branch, or a commit, and token is a personal access token with read only permissions from your provider.
To install the package, you need to add the URI in your requirements file. Pip will automatically expand environment variables in your URI, so you don’t have to hard code the token in the URI. See using environment variables in Read the Docs for more information.
Note
You have to use the POSIX format for variable names (only uppercase letters and _ are allowed), and include a dollar sign and curly brackets around the name (${API_TOKEN}) for pip to be able to recognize them.
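Environment variables can be added from your project’s dashboard, or programmatically through the API v3 Environment variable create endpoint documented later in this document. The following is a minimal sketch of the latter; the project slug my-project and the variable name GITHUB_TOKEN are illustrative:
import requests

# Create a GITHUB_TOKEN environment variable on the project "my-project"
# using the API v3 "Environment variable create" endpoint.
URL = 'https://readthedocs.org/api/v3/projects/my-project/environmentvariables/'
TOKEN = '<token>'
HEADERS = {'Authorization': f'token {TOKEN}'}
data = {'name': 'GITHUB_TOKEN', 'value': '<personal access token>'}

response = requests.post(URL, json=data, headers=HEADERS)
print(response.status_code)  # 201 on success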
Below you can find how to get a personal access token from our supported providers. We will be using environment variables for the token.
GitHub
You need to create a personal access token with the repo
scope.
Check the GitHub documentation
on how to create a personal token.
URI example:
git+https://${GITHUB_USER}:${GITHUB_TOKEN}@github.com/user/project.git@{version}
Warning
GitHub doesn’t support tokens per repository. A personal token will grant read and write access to all repositories the user has access to. You can create a machine user to give read access only to the repositories you need.
GitLab
You need to create a deploy token with the read_repository
scope for the repository you want to install the package from.
Check the GitLab documentation
on how to create a deploy token.
URI example:
git+https://${GITLAB_TOKEN_USER}:${GITLAB_TOKEN}@gitlab.com/user/project.git@{version}
Here GITLAB_TOKEN_USER
is the user from the deploy token you created, not your GitLab user.
Bitbucket
You need to create an app password with Read repositories
permissions.
Check the Bitbucket documentation
on how to create an app password.
URI example:
git+https://${BITBUCKET_USER}:${BITBUCKET_APP_PASSWORD}@bitbucket.org/user/project.git@{version}
Here BITBUCKET_USER
is your Bitbucket user.
Warning
Bitbucket doesn’t support app passwords per repository. An app password will grant read access to all repositories the user has access to.
From a repository manager other than PyPI
Pip installs your packages from PyPI by default.
If you are using a repository manager like pypiserver or Nexus Repository,
you need to set the --index-url option.
There are two ways to set that option:
Set the PIP_INDEX_URL environment variable in Read the Docs with the index URL. See https://pip.pypa.io/en/stable/reference/requirements-file-format#using-environment-variables.
Put --index-url=https://my-index-url.com/ at the top of your requirements file. See Requirements File Format.
Note
Check your repository manager’s documentation to obtain the appropriate index URL.
How-to guides: account management
- ⏩️ Managing your Read the Docs for Business subscription
Solving the most common tasks for managing Read the Docs subscriptions.
How-to guides: best practices
Over the years, we have become familiar with a number of methods that work well and which we consider best practice.
- ⏩️ Best practices for linking to your documentation
Documentation changes over time, and links and cross-references can become challenging to manage for various reasons. Here is a set of best practices explaining and addressing these challenges.
- ⏩️ Deprecating content
Best practice for removing or deprecating documentation content.
- ⏩️ Creating reproducible builds
Every documentation project has dependencies that are required to build it. Using unspecified versions of these dependencies means that your project can start breaking. In this guide, learn how to protect your project against breaking randomly. This is one of our most popular guides!
- ⏩️ Search engine optimization (SEO) for documentation projects
This article explains how documentation can be optimized to appear in search results, increasing traffic to your docs.
- ⏩️ Hiding a version
Learn how you can keep your entire version history online without overwhelming the reader with version choices.
How to deprecate content
When you deprecate a feature from your project, you may want to deprecate its docs as well, and stop your users from reading that content.
Deprecating content may sound as easy as deleting it, but doing that will break existing links, and you don’t necessarily want to make the content inaccessible. Here you’ll find some tips on how to use Read the Docs to deprecate your content progressively and in non-harmful ways.
See also
- Best practices for linking to your documentation
More information about handling URL structures, renaming and removing content.
Deprecating versions
If you have multiple versions of your project, it makes sense to have its documentation versioned as well. For example, if you have the following versions and want to deprecate v1:
https://project.readthedocs.io/en/v1/
https://project.readthedocs.io/en/v2/
https://project.readthedocs.io/en/v3/
For cases like this you can hide a version. Hidden versions won’t be listed in the versions menu of your docs, and they will be listed in a robots.txt file to stop search engines from showing results for that version.
Users can still see all versions in the dashboard of your project. To hide a version, go to your project, click on Versions > Edit, and mark the Hidden option. Check Version states for more information.
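You can also hide a version programmatically through the API v3 Version update endpoint documented later in this document. A minimal sketch, assuming the project slug project and the version slug v1 (both illustrative):
import requests

# Hide the deprecated version "v1" while keeping it active (built and served).
URL = 'https://readthedocs.org/api/v3/projects/project/versions/v1/'
TOKEN = '<token>'
HEADERS = {'Authorization': f'token {TOKEN}'}
data = {'active': True, 'hidden': True}

response = requests.patch(URL, json=data, headers=HEADERS)
print(response.status_code)  # 204 No Content on success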
Note
If the versions of your project follow the semver convention, you can activate the Version warning option for your project. A banner with a warning and a link to the stable version will be shown on all versions that are older than the stable one.
Deprecating pages
You may not always want to deprecate a version, but deprecate some pages. For example, if you have documentation about two APIs and you want to deprecate v1:
https://project.readthedocs.io/en/latest/api/v1.html
https://project.readthedocs.io/en/latest/api/v2.html
A simple way is to add a warning at the top of the page. This will warn users visiting that page, but it won’t stop users from being directed to that page from search results. You can add an entry for that page in a custom robots.txt file to prevent search engines from showing those results. For example:
# robots.txt
User-agent: *
Disallow: /en/latest/api/v1.html # Deprecated API
But your users will still see search results from that page if they use the search from your docs. With Read the Docs you can set a custom rank per page. For example:
# .readthedocs.yaml
version: 2
search:
  ranking:
    api/v1.html: -1
This won’t hide results from that page, but it will give priority to results from other pages.
Tip
You can make use of Sphinx directives (like warning, deprecated, versionchanged) or MkDocs admonitions to warn your users about deprecated content.
Moving and deleting pages
After you have deprecated a feature for a while, you may want to get rid of its documentation. That’s OK: you don’t have to maintain that content forever. But be aware that users may have saved links to that page, and it will be frustrating and confusing for them to get a 404.
To solve that problem you can create a redirect to a page with similar content, like redirecting to the docs for v2 of your API when your users visit the deleted docs for v1; this is a page redirect from /api/v1.html to /api/v2.html. See Redirects.
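Redirects can be created from your project’s dashboard, or programmatically through the API v3 redirects endpoint documented later in this document. A minimal sketch for the example above, assuming the project slug project (illustrative):
import requests

# Create a page redirect from the removed /api/v1.html to /api/v2.html.
URL = 'https://readthedocs.org/api/v3/projects/project/redirects/'
TOKEN = '<token>'
HEADERS = {'Authorization': f'token {TOKEN}'}
data = {
    'from_url': '/api/v1.html',
    'to_url': '/api/v2.html',
    'type': 'page',
}

response = requests.post(URL, json=data, headers=HEADERS)
print(response.status_code)  # 201 Created on success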
How-to guides: troubleshooting problems
In the following guides, you can learn how to fix common problems using Read the Docs.
- ⏩️ Troubleshooting build errors
A list of common errors and resolutions encountered in the build process.
- ⏩️ Troubleshooting slow builds
A list of the most common issues that are slowing down builds. Even if you are not facing any immediate performance issues, it’s always good to be familiar with the most common ones.
Troubleshooting build errors
Tip
Please help us keep this section updated and contribute your own error resolutions, performance improvements, etc. Send in your helpful comments or ideas 💡 to support@readthedocs.org or contribute directly by clicking Edit on GitHub in the top right corner of this page.
This guide provides some common errors and resolutions encountered in the build process.
Git errors
In the examples below, we use github.com
, however error messages are similar for GitLab, Bitbucket etc.
terminal prompts disabled
fatal: could not read Username for 'https://github.com': terminal prompts disabled
Resolution: This error can be quite misleading. It usually occurs when a repository could not be found because of a typo in the repository name or because the repository has been deleted. Verify your repository URL in Admin > Settings.
This error also occurs if you have changed a public repository to private and you are using https:// in your git repository URL.
Note
To use private repositories, you need a plan on Read the Docs for Business.
error: pathspec
error: pathspec 'main' did not match any file(s) known to git
Resolution: A specified branch does not exist in the git repository.
This might be because the git repository was recently created (and has no commits or branches) or because the default branch has changed name. If, for instance, the default branch on GitHub changed from master to main, you need to visit Admin > Settings to change the name of the default branch that Read the Docs expects to find when cloning the repository.
Permission denied (publickey)
git@github.com: Permission denied (publickey).
fatal: Could not read from remote repository.
Resolution: The git repository URL points to a repository, user account or organization that Read the Docs does not have credentials for. Verify that the public SSH key from your Read the Docs project is installed as a deploy key on your VCS (GitHub/GitLab/Bitbucket etc):
Navigate to Admin > SSH Keys
Copy the contents of the public key.
Ensure that the key exists as a deploy key at your VCS provider. Here are direct links to access settings for verifying and changing deploy keys - customize the URLs for your VCS host and repository details:
https://github.com/<username>/<repo>/settings/keys
https://gitlab.com/<username>/<repo>/-/settings/repository
https://bitbucket.org/<username>/<repo>/admin/access-keys/
ERROR: Repository not found.
ERROR: Repository not found.
fatal: Could not read from remote repository.
Resolution: This error usually occurs on private git repositories that no longer have the public SSH key from their Read the Docs project installed as a deploy key.
Navigate to Admin > SSH Keys
Copy the contents of the public key.
Ensure that the key exists as a deploy key at your VCS provider. Here are direct links to access settings for verifying and changing deploy keys - customize the URLs for your VCS host and repository details:
https://github.com/<username>/<repo>/settings/keys
https://gitlab.com/<username>/<repo>/-/settings/repository
https://bitbucket.org/<username>/<repo>/admin/access-keys/
This error is rare for public repositories. If your repository is public and you see this error, it may be because you have specified a wrong domain or forgotten a component in the path.
Troubleshooting slow builds
This page contains a list of the most common issues that are slowing down builds.
In case you are waiting a long time for your builds to finish or your builds are terminated by exceeding general resource limits, this troubleshooting guide will help you resolve some of the most common issues causing slow builds. Even if you are not facing any immediate performance issues, it’s always good to be familiar with the most common ones.
Build resources on Read the Docs are limited to make sure that users don’t overwhelm our build systems. The current build limits can be found on our Build resources reference.
Tip
Please help us keep this section updated and contribute your own error resolutions, performance improvements, etc. Send in your helpful comments or ideas 💡 to support@readthedocs.org or contribute directly by clicking Edit on GitHub in the top right corner of this page.
Reduce formats you’re building
You can change the formats of the docs that you’re building in your configuration file (see formats in the Configuration file overview). In particular, the htmlzip format takes up a decent amount of memory and time, so disabling it might solve your problem.
Reduce documentation build dependencies
A lot of projects reuse their requirements file for their documentation builds. If there are extra packages that you don’t need for building docs, you can create a custom requirements file just for documentation. This should speed up your documentation builds, as well as reduce your memory footprint.
Use mamba instead of conda
If you need conda packages to build your documentation, you can use mamba as a drop-in replacement for conda, which requires less memory and is noticeably faster.
Document Python modules API statically
If you are installing a lot of Python dependencies just to document your Python module API with sphinx.ext.autodoc, you can try the sphinx-autoapi Sphinx extension instead, which should produce the exact same output but runs statically. This could drastically reduce the memory and bandwidth required to build your docs.
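As a rough sketch of what the switch looks like in a Sphinx project, assuming your sources live in a src/ directory (the path is illustrative) and sphinx-autoapi has been added to your documentation requirements:
# docs/source/conf.py
extensions = [
    "autoapi.extension",  # replaces the sphinx.ext.autodoc workflow
]

# Directories that sphinx-autoapi parses statically, without importing
# (and therefore without installing) your project's dependencies.
autoapi_dirs = ["../../src"]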
Request more resources
If you still have problems building your documentation, we can increase build limits on a per-project basis. Send an email to support@readthedocs.org providing a good reason why your documentation needs more resources.
Public REST API
This section of the documentation details the public REST API. Useful to get details of projects, builds, versions, and other resources.
API v3
The Read the Docs API uses REST. JSON is returned by all API responses, including errors, and HTTP response status codes designate success and failure.
Resources
This section shows all the resources that are currently available in APIv3. There are some URL attributes that apply to all of these resources (a combined usage example follows the list):
- ?fields=: Specify which fields are going to be returned in the response.
- ?omit=: Specify which fields are going to be omitted from the response.
- ?expand=: Some resources allow you to expand/add extra fields in their responses (see Project details for example).
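For instance, the attributes can be combined in a single request. Here is a minimal sketch against the Project details endpoint documented below; the selected field names come from the example responses, and whether a particular combination is honored may depend on the resource:
import requests

URL = 'https://readthedocs.org/api/v3/projects/pip/'
TOKEN = '<token>'
HEADERS = {'Authorization': f'token {TOKEN}'}
params = {
    'fields': 'slug,urls',         # only return these fields
    'expand': 'active_versions',   # add an expandable field
}

response = requests.get(URL, params=params, headers=HEADERS)
print(response.json())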
Tip
You can browse the full API by accessing its root URL: https://readthedocs.org/api/v3/ (or https://readthedocs.com/api/v3/ on Read the Docs for Business)
Note
If you are using Read the Docs for Business, take into account that you will need to replace https://readthedocs.org/ with https://readthedocs.com/ in all the URLs used in the following examples.
Projects
Projects list
- GET /api/v3/projects/
Retrieve a list of all the projects for the currently logged-in user.
Example request:
$ curl -H "Authorization: Token <token>" https://readthedocs.org/api/v3/projects/
import requests URL = 'https://readthedocs.org/api/v3/projects/' TOKEN = '<token>' HEADERS = {'Authorization': f'token {TOKEN}'} response = requests.get(URL, headers=HEADERS) print(response.json())
Example response:
{ "count": 25, "next": "/api/v3/projects/?limit=10&offset=10", "previous": null, "results": [{ "id": 12345, "name": "Pip", "slug": "pip", "created": "2010-10-23T18:12:31+00:00", "modified": "2018-12-11T07:21:11+00:00", "language": { "code": "en", "name": "English" }, "programming_language": { "code": "py", "name": "Python" }, "repository": { "url": "https://github.com/pypa/pip", "type": "git" }, "default_version": "stable", "default_branch": "master", "subproject_of": null, "translation_of": null, "urls": { "documentation": "http://pip.pypa.io/en/stable/", "home": "https://pip.pypa.io/" }, "tags": [ "distutils", "easy_install", "egg", "setuptools", "virtualenv" ], "users": [ { "username": "dstufft" } ], "active_versions": { "stable": "{VERSION}", "latest": "{VERSION}", "19.0.2": "{VERSION}" }, "_links": { "_self": "/api/v3/projects/pip/", "versions": "/api/v3/projects/pip/versions/", "builds": "/api/v3/projects/pip/builds/", "subprojects": "/api/v3/projects/pip/subprojects/", "superproject": "/api/v3/projects/pip/superproject/", "redirects": "/api/v3/projects/pip/redirects/", "translations": "/api/v3/projects/pip/translations/" } }] }
- Query Parameters:
name (string) – return projects with matching name
slug (string) – return projects with matching slug
language (string) – language code, such as en, es, ru, etc.
programming_language (string) – programming language code, such as py, js, etc.
The results in the response is an array of project data, which is the same as GET /api/v3/projects/(string:project_slug)/.
Note
On Read the Docs for Business, the expand query parameter also accepts organization and teams.
Project details
- GET /api/v3/projects/(string: project_slug)/
Retrieve details of a single project.
Example request:
$ curl -H "Authorization: Token <token>" https://readthedocs.org/api/v3/projects/pip/
import requests URL = 'https://readthedocs.org/api/v3/projects/pip/' TOKEN = '<token>' HEADERS = {'Authorization': f'token {TOKEN}'} response = requests.get(URL, headers=HEADERS) print(response.json())
Example response:
{ "id": 12345, "name": "Pip", "slug": "pip", "created": "2010-10-23T18:12:31+00:00", "modified": "2018-12-11T07:21:11+00:00", "language": { "code": "en", "name": "English" }, "programming_language": { "code": "py", "name": "Python" }, "repository": { "url": "https://github.com/pypa/pip", "type": "git" }, "default_version": "stable", "default_branch": "master", "subproject_of": null, "translation_of": null, "urls": { "documentation": "http://pip.pypa.io/en/stable/", "home": "https://readthedocs.org/projects/pip/", "downloads": "https://readthedocs.org/projects/pip/downloads/", "builds": "https://readthedocs.org/projects/pip/builds/", "versions": "https://readthedocs.org/projects/pip/versions/", }, "tags": [ "distutils", "easy_install", "egg", "setuptools", "virtualenv" ], "users": [ { "username": "dstufft" } ], "active_versions": { "stable": "{VERSION}", "latest": "{VERSION}", "19.0.2": "{VERSION}" }, "privacy_level": "public", "external_builds_privacy_level": "public", "versioning_scheme": "multiple_versions_with_translations", "_links": { "_self": "/api/v3/projects/pip/", "versions": "/api/v3/projects/pip/versions/", "builds": "/api/v3/projects/pip/builds/", "subprojects": "/api/v3/projects/pip/subprojects/", "superproject": "/api/v3/projects/pip/superproject/", "redirects": "/api/v3/projects/pip/redirects/", "translations": "/api/v3/projects/pip/translations/" } }
- Query Parameters:
expand (string) – allows you to add/expand some extra fields in the response. Allowed values are active_versions, active_versions.last_build and active_versions.last_build.config. Multiple fields can be passed, separated by commas.
Note
versioning_scheme can be one of the following values:
multiple_versions_with_translations
multiple_versions_without_translations
single_version_without_translations
Note
On Read the Docs for Business, the expand query parameter also accepts organization and teams.
Note
The single_version attribute is deprecated; use versioning_scheme instead.
Project create
- POST /api/v3/projects/
Import a project under authenticated user.
Example request:
$ curl \ -X POST \ -H "Authorization: Token <token>" https://readthedocs.org/api/v3/projects/ \ -H "Content-Type: application/json" \ -d @body.json
import requests import json URL = 'https://readthedocs.org/api/v3/projects/' TOKEN = '<token>' HEADERS = {'Authorization': f'token {TOKEN}'} data = json.load(open('body.json', 'rb')) response = requests.post( URL, json=data, headers=HEADERS, ) print(response.json())
The content of
body.json
is like,{ "name": "Test Project", "repository": { "url": "https://github.com/readthedocs/template", "type": "git" }, "homepage": "http://template.readthedocs.io/", "programming_language": "py", "language": "es", "privacy_level": "public", "external_builds_privacy_level": "public", "tags": [ "automation", "sphinx" ] }
Example response:
Note
On Read the Docs for Business, the request body also accepts:
- Request JSON Object:
organization (string) – required; the slug of the organization under which the project will be imported.
teams (string) – optional; slugs of the teams the project will belong to.
Note
Privacy levels are only available in Read the Docs for Business.
Project update
- PATCH /api/v3/projects/(string: project_slug)/
Update an existing project.
Example request:
$ curl \ -X PATCH \ -H "Authorization: Token <token>" https://readthedocs.org/api/v3/projects/pip/ \ -H "Content-Type: application/json" \ -d @body.json
import requests import json URL = 'https://readthedocs.org/api/v3/projects/pip/' TOKEN = '<token>' HEADERS = {'Authorization': f'token {TOKEN}'} data = json.load(open('body.json', 'rb')) response = requests.patch( URL, json=data, headers=HEADERS, ) print(response.json())
The content of
body.json
is like,{ "name": "New name for the project", "repository": { "url": "https://github.com/readthedocs/readthedocs.org", "type": "git" }, "language": "ja", "programming_language": "py", "homepage": "https://readthedocs.org/", "tags" : [ "extension", "mkdocs" ] "default_version": "v0.27.0", "default_branch": "develop", "analytics_code": "UA000000", "analytics_disabled": false, "versioning_scheme": "multiple_versions_with_translations", "external_builds_enabled": true, "privacy_level": "public", "external_builds_privacy_level": "public" }
Note
Adding tags will replace existing tags with the new list; if omitted, the tags won’t change.
Note
Privacy levels are only available in Read the Docs for Business.
- Status Codes:
204 No Content – Updated successfully
Versions
Versions are different versions of the same project documentation.
The versions for a given project can be viewed in a project’s version page. For example, here is the Pip project’s version page. See Versions for more information.
Versions listing
- GET /api/v3/projects/(string: project_slug)/versions/
Retrieve a list of all versions for a project.
Example request:
$ curl -H "Authorization: Token <token>" https://readthedocs.org/api/v3/projects/pip/versions/
import requests URL = 'https://readthedocs.org/api/v3/projects/pip/versions/' TOKEN = '<token>' HEADERS = {'Authorization': f'token {TOKEN}'} response = requests.get(URL, headers=HEADERS) print(response.json())
Example response:
{ "count": 25, "next": "/api/v3/projects/pip/versions/?limit=10&offset=10", "previous": null, "results": ["VERSION"] }
- Query Parameters:
active (boolean) – return only active versions
built (boolean) – return only built versions
privacy_level (string) – return versions with specific privacy level (public or private)
slug (string) – return versions with matching slug
type (string) – return versions with specific type (branch or tag)
verbose_name (string) – return versions with matching version name
Version detail
- GET /api/v3/projects/(string: project_slug)/versions/(string: version_slug)/
Retrieve details of a single version.
Example request:
$ curl -H "Authorization: Token <token>" https://readthedocs.org/api/v3/projects/pip/versions/stable/
import requests URL = 'https://readthedocs.org/api/v3/projects/pip/versions/stable/' TOKEN = '<token>' HEADERS = {'Authorization': f'token {TOKEN}'} response = requests.get(URL, headers=HEADERS) print(response.json())
Example response:
{ "id": 71652437, "slug": "stable", "verbose_name": "stable", "identifier": "3a6b3995c141c0888af6591a59240ba5db7d9914", "ref": "19.0.2", "built": true, "active": true, "aliases": ["VERSION"], "hidden": false, "type": "tag", "last_build": "{BUILD}", "privacy_level": "public", "downloads": { "pdf": "https://pip.readthedocs.io/_/downloads/pdf/pip/stable/", "htmlzip": "https://pip.readthedocs.io/_/downloads/htmlzip/pip/stable/", "epub": "https://pip.readthedocs.io/_/downloads/epub/pip/stable/" }, "urls": { "dashboard": { "edit": "https://readthedocs.org/dashboard/pip/version/stable/edit/" }, "documentation": "https://pip.pypa.io/en/stable/", "vcs": "https://github.com/pypa/pip/tree/19.0.2" }, "_links": { "_self": "/api/v3/projects/pip/versions/stable/", "builds": "/api/v3/projects/pip/versions/stable/builds/", "project": "/api/v3/projects/pip/" } }
- Response JSON Object:
ref (string) – the version slug where the stable version points to. null when it’s not the stable version.
built (boolean) – the version has at least one successful build.
- Query Parameters:
expand (string) – allows you to add/expand some extra fields in the response. Allowed values are last_build and last_build.config. Multiple fields can be passed, separated by commas.
Version update
- PATCH /api/v3/projects/(string: project_slug)/versions/(string: version_slug)/
Update a version.
When a version is deactivated, its documentation is removed, and when it’s activated, a new build is triggered.
Updating a version also invalidates its CDN cache.
Example request:
$ curl \ -X PATCH \ -H "Authorization: Token <token>" https://readthedocs.org/api/v3/projects/pip/versions/0.23/ \ -H "Content-Type: application/json" \ -d @body.json
import requests import json URL = 'https://readthedocs.org/api/v3/projects/pip/versions/0.23/' TOKEN = '<token>' HEADERS = {'Authorization': f'token {TOKEN}'} data = json.load(open('body.json', 'rb')) response = requests.patch( URL, json=data, headers=HEADERS, ) print(response.json())
The content of
body.json
is like,{ "active": true, "hidden": false, "privacy_level": "public" }
- Status Codes:
204 No Content – Updated successfully
Note
Privacy levels are only available in Read the Docs for Business.
Builds
Builds are created by Read the Docs whenever a Project has its documentation built. Frequently this happens automatically via a webhook, but builds can also be triggered manually.
Builds can be viewed in the build page for a project. For example, here is Pip’s build page. See Build process overview for more information.
Build details
- GET /api/v3/projects/(str: project_slug)/builds/(int: build_id)/
Retrieve details of a single build for a project.
Example request:
$ curl -H "Authorization: Token <token>" https://readthedocs.org/api/v3/projects/pip/builds/8592686/?expand=config
import requests URL = 'https://readthedocs.org/api/v3/projects/pip/builds/8592686/?expand=config' TOKEN = '<token>' HEADERS = {'Authorization': f'token {TOKEN}'} response = requests.get(URL, headers=HEADERS) print(response.json())
Example response:
{ "id": 8592686, "version": "latest", "project": "pip", "created": "2018-06-19T15:15:59+00:00", "finished": "2018-06-19T15:16:58+00:00", "duration": 59, "state": { "code": "finished", "name": "Finished" }, "success": true, "error": null, "commit": "6f808d743fd6f6907ad3e2e969c88a549e76db30", "config": { "version": "1", "formats": [ "htmlzip", "epub", "pdf" ], "python": { "version": 3, "install": [ { "requirements": ".../stable/tools/docs-requirements.txt" } ], }, "conda": null, "build": { "image": "readthedocs/build:latest" }, "doctype": "sphinx_htmldir", "sphinx": { "builder": "sphinx_htmldir", "configuration": ".../stable/docs/html/conf.py", "fail_on_warning": false }, "mkdocs": { "configuration": null, "fail_on_warning": false }, "submodules": { "include": "all", "exclude": [], "recursive": true } }, "_links": { "_self": "/api/v3/projects/pip/builds/8592686/", "project": "/api/v3/projects/pip/", "version": "/api/v3/projects/pip/versions/latest/" } }
- Response JSON Object:
created (string) – The ISO-8601 datetime when the build was created.
finished (string) – The ISO-8601 datetime when the build has finished.
duration (integer) – The length of the build in seconds.
state (string) – The state of the build (one of triggered, building, installing, cloning, finished or cancelled)
error (string) – An error message if the build was unsuccessful
- Query Parameters:
expand (string) – allows you to add/expand some extra fields in the response. The only allowed value is config.
Builds listing
- GET /api/v3/projects/(str: project_slug)/builds/
Retrieve list of all the builds on this project.
Example request:
$ curl -H "Authorization: Token <token>" https://readthedocs.org/api/v3/projects/pip/builds/
import requests URL = 'https://readthedocs.org/api/v3/projects/pip/builds/' TOKEN = '<token>' HEADERS = {'Authorization': f'token {TOKEN}'} response = requests.get(URL, headers=HEADERS) print(response.json())
Example response:
{ "count": 15, "next": "/api/v3/projects/pip/builds?limit=10&offset=10", "previous": null, "results": ["BUILD"] }
- Query Parameters:
commit (string) – commit hash to filter the builds returned by commit
running (boolean) – filter the builds that are currently building/running
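For example, to list only the builds that are currently running, using the documented running query parameter (a sketch following the same pattern as the examples above):
import requests

URL = 'https://readthedocs.org/api/v3/projects/pip/builds/'
TOKEN = '<token>'
HEADERS = {'Authorization': f'token {TOKEN}'}
params = {'running': 'true'}  # documented query parameter

response = requests.get(URL, params=params, headers=HEADERS)
print(response.json())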
Build triggering
- POST /api/v3/projects/(string: project_slug)/versions/(string: version_slug)/builds/
Trigger a new build for the
version_slug
version of this project.Example request:
$ curl \ -X POST \ -H "Authorization: Token <token>" https://readthedocs.org/api/v3/projects/pip/versions/latest/builds/
import requests URL = 'https://readthedocs.org/api/v3/projects/pip/versions/latest/builds/' TOKEN = '<token>' HEADERS = {'Authorization': f'token {TOKEN}'} response = requests.post(URL, headers=HEADERS) print(response.json())
Example response:
{ "build": "{BUILD}", "project": "{PROJECT}", "version": "{VERSION}" }
- Status Codes:
202 Accepted – the build was triggered
Subprojects
Projects can be configured in a nested manner, by configuring a project as a subproject of another project. This allows for documentation projects to share a search index and a namespace or custom domain, but still be maintained independently. See Subprojects for more information.
Subproject details
- GET /api/v3/projects/(str: project_slug)/subprojects/(str: alias_slug)/
Retrieve details of a subproject relationship.
Example request:
$ curl -H "Authorization: Token <token>" https://readthedocs.org/api/v3/projects/pip/subprojects/subproject-alias/
import requests URL = 'https://readthedocs.org/api/v3/projects/pip/subprojects/subproject-alias/' TOKEN = '<token>' HEADERS = {'Authorization': f'token {TOKEN}'} response = requests.get(URL, headers=HEADERS) print(response.json())
Example response:
{ "alias": "subproject-alias", "child": ["PROJECT"], "_links": { "parent": "/api/v3/projects/pip/" } }
Subprojects listing
- GET /api/v3/projects/(str: project_slug)/subprojects/
Retrieve a list of all sub-projects for a project.
Example request:
$ curl -H "Authorization: Token <token>" https://readthedocs.org/api/v3/projects/pip/subprojects/
import requests URL = 'https://readthedocs.org/api/v3/projects/pip/subprojects/' TOKEN = '<token>' HEADERS = {'Authorization': f'token {TOKEN}'} response = requests.get(URL, headers=HEADERS) print(response.json())
Example response:
{ "count": 25, "next": "/api/v3/projects/pip/subprojects/?limit=10&offset=10", "previous": null, "results": ["SUBPROJECT RELATIONSHIP"] }
Subproject create
- POST /api/v3/projects/(str: project_slug)/subprojects/
Create a subproject relationship between two projects.
Example request:
$ curl \ -X POST \ -H "Authorization: Token <token>" https://readthedocs.org/api/v3/projects/pip/subprojects/ \ -H "Content-Type: application/json" \ -d @body.json
import requests import json URL = 'https://readthedocs.org/api/v3/projects/pip/subprojects/' TOKEN = '<token>' HEADERS = {'Authorization': f'token {TOKEN}'} data = json.load(open('body.json', 'rb')) response = requests.post( URL, json=data, headers=HEADERS, ) print(response.json())
The content of
body.json
is like,{ "child": "subproject-child-slug", "alias": "subproject-alias" }
Note
child must be a project that you have access to. If you are using Business hosting, the project must additionally be under the same organization as the parent project.
Example response:
- Response JSON Object:
child (string) – slug of the child project in the relationship.
alias (string) – optional slug alias to be used in the URL (e.g. /projects/<alias>/en/latest/). If not provided, the child project’s slug is used as the alias.
- Status Codes:
201 Created – Subproject created successfully
Subproject delete
- DELETE /api/v3/projects/(str: project_slug)/subprojects/(str: alias_slug)/
Delete a subproject relationship.
Example request:
$ curl \ -X DELETE \ -H "Authorization: Token <token>" https://readthedocs.org/api/v3/projects/pip/subprojects/subproject-alias/
import requests URL = 'https://readthedocs.org/api/v3/projects/pip/subprojects/subproject-alias/' TOKEN = '<token>' HEADERS = {'Authorization': f'token {TOKEN}'} response = requests.delete(URL, headers=HEADERS) print(response.json())
- Status Codes:
204 No Content – Subproject deleted successfully
Translations
Translations are the same version of a Project in a different language. See Localization and Internationalization for more information.
Translations listing
- GET /api/v3/projects/(str: project_slug)/translations/
Retrieve a list of all translations for a project.
Example request:
$ curl -H "Authorization: Token <token>" https://readthedocs.org/api/v3/projects/pip/translations/
import requests URL = 'https://readthedocs.org/api/v3/projects/pip/translations/' TOKEN = '<token>' HEADERS = {'Authorization': f'token {TOKEN}'} response = requests.get(URL, headers=HEADERS) print(response.json())
Example response:
{ "count": 25, "next": "/api/v3/projects/pip/translations/?limit=10&offset=10", "previous": null, "results": [{ "id": 12345, "name": "Pip", "slug": "pip", "created": "2010-10-23T18:12:31+00:00", "modified": "2018-12-11T07:21:11+00:00", "language": { "code": "en", "name": "English" }, "programming_language": { "code": "py", "name": "Python" }, "repository": { "url": "https://github.com/pypa/pip", "type": "git" }, "default_version": "stable", "default_branch": "master", "subproject_of": null, "translation_of": null, "urls": { "documentation": "http://pip.pypa.io/en/stable/", "home": "https://pip.pypa.io/" }, "tags": [ "distutils", "easy_install", "egg", "setuptools", "virtualenv" ], "users": [ { "username": "dstufft" } ], "active_versions": { "stable": "{VERSION}", "latest": "{VERSION}", "19.0.2": "{VERSION}" }, "_links": { "_self": "/api/v3/projects/pip/", "versions": "/api/v3/projects/pip/versions/", "builds": "/api/v3/projects/pip/builds/", "subprojects": "/api/v3/projects/pip/subprojects/", "superproject": "/api/v3/projects/pip/superproject/", "redirects": "/api/v3/projects/pip/redirects/", "translations": "/api/v3/projects/pip/translations/" } }] }
The
results
in response is an array of project data, which is same asGET /api/v3/projects/(string:project_slug)/
.
Redirects
Redirects allow the author to redirect an old URL of the documentation to a new one. This is useful when pages are moved around in the structure of the documentation set. See Redirects for more information.
Redirect details
- GET /api/v3/projects/(str: project_slug)/redirects/(int: redirect_id)/
Retrieve details of a single redirect for a project.
Example request
$ curl -H "Authorization: Token <token>" https://readthedocs.org/api/v3/projects/pip/redirects/1/
import requests URL = 'https://readthedocs.org/api/v3/projects/pip/redirects/1/' TOKEN = '<token>' HEADERS = {'Authorization': f'token {TOKEN}'} response = requests.get(URL, headers=HEADERS) print(response.json())
Example response
{ "pk": 1, "created": "2019-04-29T10:00:00Z", "modified": "2019-04-29T12:00:00Z", "project": "pip", "from_url": "/docs/", "to_url": "/documentation/", "type": "page", "http_status": 302, "description": "", "enabled": true, "force": false, "position": 0, "_links": { "_self": "/api/v3/projects/pip/redirects/1/", "project": "/api/v3/projects/pip/" } }
Redirects listing
- GET /api/v3/projects/(str: project_slug)/redirects/
Retrieve list of all the redirects for this project.
Example request
$ curl -H "Authorization: Token <token>" https://readthedocs.org/api/v3/projects/pip/redirects/
import requests URL = 'https://readthedocs.org/api/v3/projects/pip/redirects/' TOKEN = '<token>' HEADERS = {'Authorization': f'token {TOKEN}'} response = requests.get(URL, headers=HEADERS) print(response.json())
Example response
{ "count": 25, "next": "/api/v3/projects/pip/redirects/?limit=10&offset=10", "previous": null, "results": ["REDIRECT"] }
Redirect create
- POST /api/v3/projects/(str: project_slug)/redirects/
Create a redirect for this project.
Example request:
$ curl \ -X POST \ -H "Authorization: Token <token>" https://readthedocs.org/api/v3/projects/pip/redirects/ \ -H "Content-Type: application/json" \ -d @body.json
import requests import json URL = 'https://readthedocs.org/api/v3/projects/pip/redirects/' TOKEN = '<token>' HEADERS = {'Authorization': f'token {TOKEN}'} data = json.load(open('body.json', 'rb')) response = requests.post( URL, json=data, headers=HEADERS, ) print(response.json())
The content of
body.json
is like,{ "from_url": "/docs/", "to_url": "/documentation/", "type": "page", "position": 0, }
Note
type can be one of page, exact, clean_url_to_html and html_to_clean_url.
Depending on the type of the redirect, some fields may not be needed:
page and exact types require from_url and to_url.
clean_url_to_html and html_to_clean_url types do not require from_url and to_url.
Example response:
- Status Codes:
201 Created – redirect created successfully
Redirect update
- PUT /api/v3/projects/(str: project_slug)/redirects/(int: redirect_id)/
Update a redirect for this project.
Example request:
$ curl \ -X PUT \ -H "Authorization: Token <token>" https://readthedocs.org/api/v3/projects/pip/redirects/1/ \ -H "Content-Type: application/json" \ -d @body.json
import requests import json URL = 'https://readthedocs.org/api/v3/projects/pip/redirects/1/' TOKEN = '<token>' HEADERS = {'Authorization': f'token {TOKEN}'} data = json.load(open('body.json', 'rb')) response = requests.put( URL, json=data, headers=HEADERS, ) print(response.json())
The content of
body.json
is like,{ "from_url": "/docs/", "to_url": "/documentation.html", "type": "page" }
Note
If the position of the redirect is changed, it will be inserted in the new position and the other redirects will be reordered.
Example response:
Redirect delete
- DELETE /api/v3/projects/(str: project_slug)/redirects/(int: redirect_id)/
Delete a redirect for this project.
Example request:
$ curl \ -X DELETE \ -H "Authorization: Token <token>" https://readthedocs.org/api/v3/projects/pip/redirects/1/
import requests URL = 'https://readthedocs.org/api/v3/projects/pip/redirects/1/' TOKEN = '<token>' HEADERS = {'Authorization': f'token {TOKEN}'} response = requests.delete(URL, headers=HEADERS) print(response.json())
- Status Codes:
204 No Content – Redirect deleted successfully
Environment variables
Environment variables are variables that you can define for your project. These variables are used in the build process when building your documentation. They are useful, for example, to define secrets in a safe way that your documentation build can use. Environment variables can also be made public, allowing them to be used in pull request builds. See Environment variable overview.
Environment variable details
- GET /api/v3/projects/(str: project_slug)/environmentvariables/(int: environmentvariable_id)/
Retrieve details of a single environment variable for a project.
Example request
$ curl -H "Authorization: Token <token>" https://readthedocs.org/api/v3/projects/pip/environmentvariables/1/
import requests URL = 'https://readthedocs.org/api/v3/projects/pip/environmentvariables/1/' TOKEN = '<token>' HEADERS = {'Authorization': f'token {TOKEN}'} response = requests.get(URL, headers=HEADERS) print(response.json())
Example response
{ "_links": { "_self": "https://readthedocs.org/api/v3/projects/project/environmentvariables/1/", "project": "https://readthedocs.org/api/v3/projects/project/" }, "created": "2019-04-29T10:00:00Z", "modified": "2019-04-29T12:00:00Z", "pk": 1, "project": "project", "public": false, "name": "ENVVAR" }
Environment variables listing
- GET /api/v3/projects/(str: project_slug)/environmentvariables/
Retrieve list of all the environment variables for this project.
Example request
$ curl -H "Authorization: Token <token>" https://readthedocs.org/api/v3/projects/pip/environmentvariables/
import requests URL = 'https://readthedocs.org/api/v3/projects/pip/environmentvariables/' TOKEN = '<token>' HEADERS = {'Authorization': f'token {TOKEN}'} response = requests.get(URL, headers=HEADERS) print(response.json())
Example response
{ "count": 15, "next": "/api/v3/projects/pip/environmentvariables/?limit=10&offset=10", "previous": null, "results": ["ENVIRONMENTVARIABLE"] }
Environment variable create
- POST /api/v3/projects/(str: project_slug)/environmentvariables/
Create an environment variable for this project.
Example request:
$ curl \ -X POST \ -H "Authorization: Token <token>" https://readthedocs.org/api/v3/projects/pip/environmentvariables/ \ -H "Content-Type: application/json" \ -d @body.json
import requests import json URL = 'https://readthedocs.org/api/v3/projects/pip/environmentvariables/' TOKEN = '<token>' HEADERS = {'Authorization': f'token {TOKEN}'} data = json.load(open('body.json', 'rb')) response = requests.post( URL, json=data, headers=HEADERS, ) print(response.json())
The content of
body.json
is like,{ "name": "MYVAR", "value": "My secret value" }
Example response:
See Environment Variable details
- Status Codes:
201 Created – Environment variable created successfully
Environment variable delete
- DELETE /api/v3/projects/(str: project_slug)/environmentvariables/(int: environmentvariable_id)/
Delete an environment variable for this project.
Example request:
$ curl \ -X DELETE \ -H "Authorization: Token <token>" https://readthedocs.org/api/v3/projects/pip/environmentvariables/1/
import requests URL = 'https://readthedocs.org/api/v3/projects/pip/environmentvariables/1/' TOKEN = '<token>' HEADERS = {'Authorization': f'token {TOKEN}'} response = requests.delete(URL, headers=HEADERS) print(response.json())
- Request Headers:
Authorization – token to authenticate.
- Status Codes:
204 No Content – Environment variable deleted successfully
Organizations
Note
The /api/v3/organizations/ endpoint is currently only available in Read the Docs for Business.
We plan to bring organizations to Read the Docs Community in the near future and will add support for this endpoint at the same time.
Organizations list
- GET /api/v3/organizations/
Retrieve a list of all the organizations for the currently logged-in user.
Example request:
$ curl -H "Authorization: Token <token>" https://readthedocs.com/api/v3/organizations/
import requests URL = 'https://readthedocs.com/api/v3/organizations/' TOKEN = '<token>' HEADERS = {'Authorization': f'token {TOKEN}'} response = requests.get(URL, headers=HEADERS) print(response.json())
Example response:
{ "count": 1, "next": null, "previous": null, "results": [ { "_links": { "_self": "https://readthedocs.com/api/v3/organizations/pypa/", "projects": "https://readthedocs.com/api/v3/organizations/pypa/projects/" }, "created": "2019-02-22T21:54:52.768630Z", "description": "", "disabled": false, "email": "pypa@psf.org", "modified": "2020-07-02T12:35:32.418423Z", "name": "Python Package Authority", "owners": [ { "username": "dstufft" } ], "slug": "pypa", "url": "https://github.com/pypa/" } }
Organization details
- GET /api/v3/organizations/(string: organization_slug)/
Retrieve details of a single organization.
Example request:
$ curl -H "Authorization: Token <token>" https://readthedocs.com/api/v3/organizations/pypa/
import requests URL = 'https://readthedocs.com/api/v3/organizations/pypa/' TOKEN = '<token>' HEADERS = {'Authorization': f'token {TOKEN}'} response = requests.get(URL, headers=HEADERS) print(response.json())
Example response:
{ "_links": { "_self": "https://readthedocs.com/api/v3/organizations/pypa/", "projects": "https://readthedocs.com/api/v3/organizations/pypa/projects/" }, "created": "2019-02-22T21:54:52.768630Z", "description": "", "disabled": false, "email": "pypa@psf.com", "modified": "2020-07-02T12:35:32.418423Z", "name": "Python Package Authority", "owners": [ { "username": "dstufft" } ], "slug": "pypa", "url": "https://github.com/pypa/" }
Organization projects list
- GET /api/v3/organizations/(string: organization_slug)/projects/
Retrieve list of projects under an organization.
Example request:
$ curl -H "Authorization: Token <token>" https://readthedocs.com/api/v3/organizations/pypa/projects/
import requests URL = 'https://readthedocs.com/api/v3/organizations/pypa/projects/' TOKEN = '<token>' HEADERS = {'Authorization': f'token {TOKEN}'} response = requests.get(URL, headers=HEADERS) print(response.json())
Example response:
{ "count": 1, "next": null, "previous": null, "results": [ { "_links": { "_self": "https://readthedocs.com/api/v3/projects/pypa-pip/", "builds": "https://readthedocs.com/api/v3/projects/pypa-pip/builds/", "environmentvariables": "https://readthedocs.com/api/v3/projects/pypa-pip/environmentvariables/", "redirects": "https://readthedocs.com/api/v3/projects/pypa-pip/redirects/", "subprojects": "https://readthedocs.com/api/v3/projects/pypa-pip/subprojects/", "superproject": "https://readthedocs.com/api/v3/projects/pypa-pip/superproject/", "translations": "https://readthedocs.com/api/v3/projects/pypa-pip/translations/", "versions": "https://readthedocs.com/api/v3/projects/pypa-pip/versions/" }, "created": "2019-02-22T21:59:13.333614Z", "default_branch": "master", "default_version": "latest", "homepage": null, "id": 2797, "language": { "code": "en", "name": "English" }, "modified": "2019-08-08T16:27:25.939531Z", "name": "pip", "programming_language": { "code": "py", "name": "Python" }, "repository": { "type": "git", "url": "https://github.com/pypa/pip" }, "slug": "pypa-pip", "subproject_of": null, "tags": [], "translation_of": null, "urls": { "builds": "https://readthedocs.com/projects/pypa-pip/builds/", "documentation": "https://pypa-pip.readthedocs-hosted.com/en/latest/", "home": "https://readthedocs.com/projects/pypa-pip/", "versions": "https://readthedocs.com/projects/pypa-pip/versions/" } } ] }
Remote organizations
Remote organizations are the VCS organizations connected via GitHub, GitLab and Bitbucket.
Remote organization listing
- GET /api/v3/remote/organizations/
Retrieve a list of all Remote Organizations for the authenticated user.
Example request:
$ curl -H "Authorization: Token <token>" https://readthedocs.org/api/v3/remote/organizations/
import requests URL = 'https://readthedocs.org/api/v3/remote/organizations/' TOKEN = '<token>' HEADERS = {'Authorization': f'token {TOKEN}'} response = requests.get(URL, headers=HEADERS) print(response.json())
Example response:
{ "count": 20, "next": "api/v3/remote/organizations/?limit=10&offset=10", "previous": null, "results": [ { "avatar_url": "https://avatars.githubusercontent.com/u/12345?v=4", "created": "2019-04-29T10:00:00Z", "modified": "2019-04-29T12:00:00Z", "name": "Organization Name", "pk": 1, "slug": "organization", "url": "https://github.com/organization", "vcs_provider": "github" } ] }
The
results
in response is an array of remote organizations data.- Query Parameters:
name (string) – return remote organizations containing the name
vcs_provider (string) – return remote organizations for specific vcs provider (
github
,gitlab
orbitbucket
)
- Request Headers:
Authorization – token to authenticate.
Remote repositories
Remote repositories are the importable repositories connected via GitHub, GitLab and Bitbucket.
Remote repository listing
- GET /api/v3/remote/repositories/
Retrieve a list of all Remote Repositories for the authenticated user.
Example request:
$ curl -H "Authorization: Token <token>" https://readthedocs.org/api/v3/remote/repositories/?expand=projects,remote_organization
import requests URL = 'https://readthedocs.org/api/v3/remote/repositories/?expand=projects,remote_organization' TOKEN = '<token>' HEADERS = {'Authorization': f'token {TOKEN}'} response = requests.get(URL, headers=HEADERS) print(response.json())
Example response:
{ "count": 20, "next": "api/v3/remote/repositories/?expand=projects,remote_organization&limit=10&offset=10", "previous": null, "results": [ { "remote_organization": { "avatar_url": "https://avatars.githubusercontent.com/u/12345?v=4", "created": "2019-04-29T10:00:00Z", "modified": "2019-04-29T12:00:00Z", "name": "Organization Name", "pk": 1, "slug": "organization", "url": "https://github.com/organization", "vcs_provider": "github" }, "project": [{ "id": 12345, "name": "project", "slug": "project", "created": "2010-10-23T18:12:31+00:00", "modified": "2018-12-11T07:21:11+00:00", "language": { "code": "en", "name": "English" }, "programming_language": { "code": "py", "name": "Python" }, "repository": { "url": "https://github.com/organization/project", "type": "git" }, "default_version": "stable", "default_branch": "master", "subproject_of": null, "translation_of": null, "urls": { "documentation": "http://project.readthedocs.io/en/stable/", "home": "https://readthedocs.org/projects/project/" }, "tags": [ "test" ], "users": [ { "username": "dstufft" } ], "_links": { "_self": "/api/v3/projects/project/", "versions": "/api/v3/projects/project/versions/", "builds": "/api/v3/projects/project/builds/", "subprojects": "/api/v3/projects/project/subprojects/", "superproject": "/api/v3/projects/project/superproject/", "redirects": "/api/v3/projects/project/redirects/", "translations": "/api/v3/projects/project/translations/" } }], "avatar_url": "https://avatars3.githubusercontent.com/u/test-organization?v=4", "clone_url": "https://github.com/organization/project.git", "created": "2019-04-29T10:00:00Z", "description": "This is a test project.", "full_name": "organization/project", "html_url": "https://github.com/organization/project", "modified": "2019-04-29T12:00:00Z", "name": "project", "pk": 1, "ssh_url": "git@github.com:organization/project.git", "vcs": "git", "vcs_provider": "github", "default_branch": "master", "private": false, "admin": true } ] }
The
results
in response is an array of remote repositories data.- Query Parameters:
name (string) – return remote repositories containing the name
full_name (string) – return remote repositories containing the full name (it includes the username/organization the project belongs to)
vcs_provider (string) – return remote repositories for specific vcs provider (
github
,gitlab
orbitbucket
)organization (string) – return remote repositories for specific remote organization (using remote organization
slug
expand (string) – allows you to add/expand some extra fields in the response. Allowed values are projects and remote_organization. Multiple fields can be passed, separated by commas.
- Request Headers:
Authorization – token to authenticate.
Embed
- GET /api/v3/embed/
Retrieve HTML-formatted content from a documentation page or section. Read How to embed content from your documentation to learn more about how to use this endpoint.
Warning
The content will be returned as is, without any sanitization or escaping. You should not include content from arbitrary projects, or projects you do not trust.
Example request:
curl https://readthedocs.org/api/v3/embed/?url=https://docs.readthedocs.io/en/latest/features.html%23read-the-docs-features
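The same request can be made with Python's requests library; this is a small sketch using the same URL as the curl example above:
import requests

URL = 'https://readthedocs.org/api/v3/embed/'
params = {
    'url': 'https://docs.readthedocs.io/en/latest/features.html#read-the-docs-features',
}
response = requests.get(URL, params=params)
print(response.json())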
Example response:
{ "url": "https://docs.readthedocs.io/en/latest/features.html#read-the-docs-features", "fragment": "read-the-docs-features", "content": "<div class=\"section\" id=\"read-the-docs-features\">\n<h1>Read the Docs ...", "external": false }
- Response JSON Object:
url (string) – URL of the document.
fragment (string) – fragment part of the URL used to query the page.
content (string) – HTML content of the section.
external (boolean) – whether the page is hosted externally or on Read the Docs.
- Query Parameters:
url (string) – full URL of the document (with optional fragment) to fetch content from.
doctool (string) – optional documentation tool key name used to generate the target documentation (currently, only sphinx is accepted).
doctoolversion (string) – optional documentation tool version used to generate the target documentation (e.g. 4.2.0).
Note
Passing ?doctool= and ?doctoolversion= may improve the response, since the endpoint will know more about the exact structure of the HTML and can make better decisions.
Additional APIs
API v2
The Read the Docs API uses REST. JSON is returned by all API responses, including errors, and HTTP response status codes designate success or failure.
Warning
API v2 is planned to be deprecated, though we have not yet set a time frame for deprecation. We will alert users of our plans when we do.
For now, API v2 is still used by some legacy application operations, but we highly recommend that Read the Docs users use API v3 instead.
Some improvements in API v3 are:
Token based authentication
Easier to use URLs which no longer use numerical ids
More common user actions are exposed through the API
Improved error reporting
See its full documentation at API v3.
Resources
Projects
Projects are the main building block of Read the Docs. Projects are built when there are changes to the code and the resulting documentation is hosted and served by Read the Docs.
As an example, this documentation is part of the Docs project which has documentation at https://docs.readthedocs.io.
You can always view your Read the Docs projects in your project dashboard.
Project list
- GET /api/v2/project/
Retrieve a list of all Read the Docs projects.
Example request:
curl https://readthedocs.org/api/v2/project/?slug=pip
Example response:
{ "count": 1, "next": null, "previous": null, "results": [PROJECTS] }
- Response JSON Object:
next (string) – URI for next set of Projects.
previous (string) – URI for previous set of Projects.
count (integer) – Total number of Projects.
results (array) – Array of Project objects.
- Query Parameters:
slug (string) – Narrow the results by matching the exact project slug
Project details
- GET /api/v2/project/(int: id)/
Retrieve details of a single project.
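Example request (a sketch; the ID is the Pip project shown in the response below):
curl https://readthedocs.org/api/v2/project/6/
Example response: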
{ "id": 6, "name": "Pip", "slug": "pip", "programming_language": "py", "default_version": "stable", "default_branch": "master", "repo_type": "git", "repo": "https://github.com/pypa/pip", "description": "Pip Installs Packages.", "language": "en", "documentation_type": "sphinx_htmldir", "canonical_url": "http://pip.pypa.io/en/stable/", "users": [USERS] }
- Response JSON Object:
id (integer) – The ID of the project
name (string) – The name of the project.
slug (string) – The project slug (used in the URL).
programming_language (string) – The programming language of the project (e.g. “py”, “js”)
default_version (string) – The default version of the project (e.g. “latest”, “stable”, “v3”)
default_branch (string) – The default version control branch
repo_type (string) – The version control system used by the project’s repository (e.g. “git”)
repo (string) – The repository URL for the project
description (string) – An RST description of the project
language (string) – The language code of this project
documentation_type (string) – The documentation tool used to build the project (e.g. “sphinx_htmldir”)
canonical_url (string) – The canonical URL of the default docs
users (array) – Array of User IDs who are maintainers of the project.
- Status Codes:
200 OK – no error
404 Not Found – There is no Project with this ID
Project versions
- GET /api/v2/project/(int: id)/active_versions/
Retrieve a list of active versions (e.g. “latest”, “stable”, “v1.x”) for a single project.
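Example request (a sketch using the Pip project ID from the previous section):
curl https://readthedocs.org/api/v2/project/6/active_versions/
Example response: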
{ "versions": [VERSION, VERSION, ...] }
- Response JSON Object:
versions (array) – Version objects for the given Project. See the Version detail call for the format of the Version object.
Versions
Versions are the different versions of the same project’s documentation.
The versions for a given project can be viewed in a project’s version screen. For example, here is the Pip project’s version screen.
Version list
- GET /api/v2/version/
Retrieve a list of all Versions for all projects
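Example request (a sketch filtering by the documented query parameters):
curl "https://readthedocs.org/api/v2/version/?project__slug=pip&active=true"
Example response: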
{ "count": 1000, "previous": null, "results": [VERSIONS], "next": "https://readthedocs.org/api/v2/version/?limit=10&offset=10" }
- Response JSON Object:
next (string) – URI for next set of Versions.
previous (string) – URI for previous set of Versions.
count (integer) – Total number of Versions.
results (array) – Array of Version objects.
- Query Parameters:
project__slug (string) – Narrow to the versions for a specific Project
active (boolean) – Pass true or false to show only active or inactive versions. By default, the API returns all versions.
Version detail
- GET /api/v2/version/(int: id)/
Retrieve details of a single version.
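Example request (a sketch; the ID is the version shown in the response below):
curl https://readthedocs.org/api/v2/version/1437428/
Example response: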
{ "id": 1437428, "slug": "stable", "verbose_name": "stable", "built": true, "active": true, "type": "tag", "identifier": "3a6b3995c141c0888af6591a59240ba5db7d9914", "privacy_level": "public", "downloads": { "pdf": "//readthedocs.org/projects/pip/downloads/pdf/stable/", "htmlzip": "//readthedocs.org/projects/pip/downloads/htmlzip/stable/", "epub": "//readthedocs.org/projects/pip/downloads/epub/stable/" }, "project": {PROJECT}, }
- Response JSON Object:
id (integer) – The ID of the version
verbose_name (string) – The name of the version.
slug (string) – The version slug.
built (boolean) – Whether this version has been built
active (boolean) – Whether this version is still active
type (string) – The type of this version (typically “tag” or “branch”)
identifier (string) – A version control identifier for this version (e.g. the commit hash of the tag)
downloads (array) – URLs to downloads of this version’s documentation
project (object) – Details of the Project for this version.
- Status Codes:
200 OK – no error
404 Not Found – There is no Version with this ID
Builds
Builds are created by Read the Docs whenever a Project has its documentation built. Frequently this happens automatically via a webhook, but builds can also be triggered manually.
Builds can be viewed in the build screen for a project. For example, here is Pip’s build screen.
Build list
- GET /api/v2/build/
Retrieve details of builds ordered by most recent first
Example request:
curl https://readthedocs.org/api/v2/build/?project__slug=pip
Example response:
{ "count": 100, "next": null, "previous": null, "results": [BUILDS] }
- Response JSON Object:
next (string) – URI for next set of Builds.
previous (string) – URI for previous set of Builds.
count (integer) – Total number of Builds.
results (array) – Array of Build objects.
- Query Parameters:
project__slug (string) – Narrow to builds for a specific Project
commit (string) – Narrow to builds for a specific commit
Build detail
- GET /api/v2/build/(int: id)/
Retrieve details of a single build.
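Example request (a sketch; the ID is the build shown in the response below):
curl https://readthedocs.org/api/v2/build/7367364/
Example response: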
{ "id": 7367364, "date": "2018-06-19T15:15:59.135894", "length": 59, "type": "html", "state": "finished", "success": true, "error": "", "commit": "6f808d743fd6f6907ad3e2e969c88a549e76db30", "docs_url": "http://pip.pypa.io/en/latest/", "project": 13, "project_slug": "pip", "version": 3681, "version_slug": "latest", "commands": [ { "description": "", "start_time": "2018-06-19T20:16:00.951959", "exit_code": 0, "build": 7367364, "command": "git remote set-url origin git://github.com/pypa/pip.git", "run_time": 0, "output": "", "id": 42852216, "end_time": "2018-06-19T20:16:00.969170" }, ... ], ... }
- Response JSON Object:
id (integer) – The ID of the build
date (string) – The ISO-8601 datetime of the build.
length (integer) – The length of the build in seconds.
type (string) – The type of the build (one of “html”, “pdf”, “epub”)
state (string) – The state of the build (one of “triggered”, “building”, “installing”, “cloning”, or “finished”)
success (boolean) – Whether the build was successful
error (string) – An error message if the build was unsuccessful
commit (string) – A version control identifier for this build (e.g. the commit hash)
docs_url (string) – The canonical URL of the build docs
project (integer) – The ID of the project being built
project_slug (string) – The slug for the project being built
version (integer) – The ID of the version of the project being built
version_slug (string) – The slug for the version of the project being built
commands (array) – Array of commands for the build with details including output.
- Status Codes:
200 OK – no error
404 Not Found – There is no Build with this ID
Some fields primarily used for UI elements in Read the Docs are omitted.
Embed
- GET /api/v2/embed/
Retrieve HTML-formatted content from a documentation page or section.
Warning
The content will be returned as is, without any sanitization or escaping. You should not include content from arbitrary projects, or projects you do not trust.
Example request:
curl https://readthedocs.org/api/v2/embed/?project=docs&version=latest&doc=features&path=features.html
or
curl https://readthedocs.org/api/v2/embed/?url=https://docs.readthedocs.io/en/latest/features.html
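The same request can also be made with Python's requests library; a small sketch using the project/version/doc form shown above:
import requests

URL = 'https://readthedocs.org/api/v2/embed/'
params = {
    'project': 'docs',
    'version': 'latest',
    'doc': 'features',
    'path': 'features.html',
}
response = requests.get(URL, params=params)
print(response.json())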
Example response:
{ "content": [ "<div class=\"section\" id=\"read-the-docs-features\">\n<h1>Read the Docs..." ], "headers": [ { "Read the Docs features": "#" }, { "Automatic Documentation Deployment": "#automatic-documentation-deployment" }, { "Custom Domains & White Labeling": "#custom-domains-white-labeling" }, { "Versioned Documentation": "#versioned-documentation" }, { "Downloadable Documentation": "#downloadable-documentation" }, { "Full-Text Search": "#full-text-search" }, { "Open Source and Customer Focused": "#open-source-and-customer-focused" } ], "url": "https://docs.readthedocs.io/en/latest/features", "meta": { "project": "docs", "version": "latest", "doc": "features", "section": "read the docs features" } }
- Response JSON Object:
content (string) – HTML content of the section.
headers (object) – section’s headers in the document.
url (string) – URL of the document.
meta (object) – meta data of the requested section.
- Query Parameters:
project (string) – Read the Docs project’s slug.
doc (string) – document to fetch content from.
version (string) – optional Read the Docs version’s slug (default: latest).
section (string) – optional section within the document to fetch.
path (string) – optional full path to the document including extension.
url (string) – full URL of the document (and section) to fetch content from.
Note
You can call this endpoint by sending at least the project and doc parameters, or the url parameter.
Undocumented resources and endpoints
There are some undocumented endpoints in the API. These should not be used and could change at any time. These include:
Endpoints for returning footer and version data to be injected into docs (/api/v2/footer_html)
Endpoints used for advertising (/api/v2/sustainability/)
Any other endpoints not detailed above.
Server side search API
You can integrate our server side search in your documentation by using our API.
If you are using Business hosting you will need to replace https://readthedocs.org/ with https://readthedocs.com/ in all the URLs used in the following examples. Check Authentication and authorization if you are using private versions.
API v3
- GET /api/v3/search/
Return a list of search results for a project or subset of projects. Results are divided into sections with highlights of the matching term.
- Query Parameters:
q – Search query (see Search query syntax)
page – Jump to a specific page
page_size – Limits the results per page, default is 50
- Response JSON Object:
type (string) – The type of the result, currently page is the only type.
project (object) – The project object
version (object) – The version object
title (string) – The title of the page
domain (string) – Canonical domain of the resulting page
path (string) – Path to the resulting page
highlights (object) – An object containing a list of substrings with matching terms. Note that the text is HTML escaped with the matching terms inside a <span> tag.
blocks (object) – A list of block objects containing search results from the page. Currently, there is one type of block:
section: A page section with a linkable anchor (id attribute).
Warning
Except for highlights, the content is not HTML escaped. You shouldn’t include it in your page without escaping it first.
Example request:
$ curl "https://readthedocs.org/api/v3/search/?q=project:docs%20server%20side%20search"
import requests

URL = 'https://readthedocs.org/api/v3/search/'
params = {
    'q': 'project:docs server side search',
}
response = requests.get(URL, params=params)
print(response.json())
Example response:
{ "count": 41, "next": "https://readthedocs.org/api/v3/search/?page=2&q=project:docs%20server+side+search", "previous": null, "projects": [ { "slug": "docs", "versions": [ {"slug": "latest"} ] } ], "query": "server side search", "results": [ { "type": "page", "project": { "slug": "docs", "alias": null }, "version": { "slug": "latest" }, "title": "Server Side Search", "domain": "https://docs.readthedocs.io", "path": "/en/latest/server-side-search.html", "highlights": { "title": [ "<span>Server</span> <span>Side</span> <span>Search</span>" ] }, "blocks": [ { "type": "section", "id": "server-side-search", "title": "Server Side Search", "content": "Read the Docs provides full-text search across all of the pages of all projects, this is powered by Elasticsearch.", "highlights": { "title": [ "<span>Server</span> <span>Side</span> <span>Search</span>" ], "content": [ "You can <span>search</span> all projects at https://readthedocs.org/<span>search</span>/" ] } }, { "type": "domain", "role": "http:get", "name": "/_/api/v2/search/", "id": "get--_-api-v2-search-", "content": "Retrieve search results for docs", "highlights": { "name": [""], "content": ["Retrieve <span>search</span> results for docs"] } } ] }, ] }
Migrating from API v2
Instead of using query arguments to specify the project and version to search, you need to do it from the query itself. For example, if you had the following parameters:
project: docs
version: latest
q: test
Now you need to use:
q: project:docs/latest test
The response of the API is very similar to v2, with the following changes:
project is an object, not a string.
version is an object, not a string.
project_alias isn’t present; it is contained in the project object.
When searching on a parent project, results from its subprojects won’t be included automatically. To include results from subprojects, use the subprojects parameter.
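As a sketch, the translation above looks like this with Python's requests library (the project and version values are the same illustrative ones used above):
import requests

# API v2: project, version and q were separate query arguments.
v2_params = {'project': 'docs', 'version': 'latest', 'q': 'test'}

# API v3: the project and version become part of the query itself.
v3_params = {'q': 'project:docs/latest test'}

response = requests.get('https://readthedocs.org/api/v3/search/', params=v3_params)
print(response.json())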
API v2 (deprecated)
Note
Please use our API v3 instead, see Migrating from API v2.
- GET /api/v2/search/
Return a list of search results for a project, including results from its Subprojects. Results are divided into sections with highlights of the matching term.
- Query Parameters:
q – Search query
project – Project slug
version – Version slug
page – Jump to a specific page
page_size – Limits the results per page, default is 50
- Response JSON Object:
type (string) – The type of the result, currently page is the only type.
project (string) – The project slug
project_alias (string) – Alias of the project if it’s a subproject.
version (string) – The version slug
title (string) – The title of the page
domain (string) – Canonical domain of the resulting page
path (string) – Path to the resulting page
highlights (object) – An object containing a list of substrings with matching terms. Note that the text is HTML escaped with the matching terms inside a <span> tag.
blocks (object) – A list of block objects containing search results from the page. Currently, there is one type of block:
section: A page section with a linkable anchor (id attribute).
Warning
Except for highlights, the content is not HTML escaped. You shouldn’t include it in your page without escaping it first.
Example request:
$ curl "https://readthedocs.org/api/v2/search/?project=docs&version=latest&q=server%20side%20search"
import requests

URL = 'https://readthedocs.org/api/v2/search/'
params = {
    'q': 'server side search',
    'project': 'docs',
    'version': 'latest',
}
response = requests.get(URL, params=params)
print(response.json())
Example response:
{ "count": 41, "next": "https://readthedocs.org/api/v2/search/?page=2&project=read-the-docs&q=server+side+search&version=latest", "previous": null, "results": [ { "type": "page", "project": "docs", "project_alias": null, "version": "latest", "title": "Server Side Search", "domain": "https://docs.readthedocs.io", "path": "/en/latest/server-side-search.html", "highlights": { "title": [ "<span>Server</span> <span>Side</span> <span>Search</span>" ] }, "blocks": [ { "type": "section", "id": "server-side-search", "title": "Server Side Search", "content": "Read the Docs provides full-text search across all of the pages of all projects, this is powered by Elasticsearch.", "highlights": { "title": [ "<span>Server</span> <span>Side</span> <span>Search</span>" ], "content": [ "You can <span>search</span> all projects at https://readthedocs.org/<span>search</span>/" ] } } ] }, ] }
Cross-site requests
Cross-site requests are allowed for the following endpoints:
Except for the sustainability API, none of the above endpoints allow you to pass credentials in cross-site requests. In other words, these API endpoints allow you to access public information only.
On a technical level, this is achieved by implementing the CORS standard, which is supported by all major browsers. We implement it in such a way that it strictly matches the intention of each API endpoint.
Frequently asked questions
Building and publishing your project
Why does my project have status “failing”?
Projects have the status “failing” because something in the build process has failed. This can be because the project is not correctly configured, because the contents of the Git repository cannot be built, or, in the rarest of cases, because a system that Read the Docs connects to is not working.
First, you should check out the Builds tab of your project. By clicking on the failing step, you will be able to see details that can lead you to a resolution of your build error.
If the solution is not self-evident, you can use an important word or message from the error to search for a solution.
See also
- Troubleshooting build errors
Common errors and solutions for build failures.
- Other FAQ entries
Why do I get import errors from libraries depending on C modules?
Note
Another use case for this is when you have a module with a C extension.
This happens because the build system does not have the dependencies for building your project, such as C libraries needed by some Python packages (e.g. libevent or mysql). For libraries that cannot be installed via apt in the builder, there is another way to successfully build the documentation despite missing dependencies.
With Sphinx you can use the built-in autodoc_mock_imports for mocking. If such libraries are installed via setup.py, you will also need to remove all the C-dependent libraries from your install_requires in the Read the Docs environment.
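For example, a minimal sketch for your Sphinx conf.py; the module names below are only illustrative and should be replaced with the C-dependent modules your project imports:
# conf.py
# Modules listed here are mocked out during the documentation build.
autodoc_mock_imports = ["MySQLdb", "psycopg2"]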
Where do I need to put my docs for Read the Docs to find it?
You can put your docs wherever you want in your repository. However, you will need to tell Read the Docs where your Sphinx (i.e. conf.py) or MkDocs (i.e. mkdocs.yml) configuration file lives in order to build your documentation.
This is done by using the sphinx.configuration or mkdocs.configuration config key in your Read the Docs configuration file. Read Configuration file overview to learn more about this.
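For example, a minimal sketch of the relevant fragment in .readthedocs.yaml, assuming your Sphinx configuration lives in docs/:
sphinx:
  configuration: docs/conf.py

# or, for MkDocs projects:
# mkdocs:
#   configuration: mkdocs.yml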
How can I avoid search results having a deprecated version of my docs?
If readers search for something related to your docs in Google, it will probably return the most relevant version of your documentation. It may happen that this version is already deprecated and you want to stop Google from indexing it, and start suggesting the latest (or a newer) one instead.
To accomplish this, you can add a robots.txt
file to your documentation’s root so it ends up served at the root URL of your project
(for example, https://yourproject.readthedocs.io/robots.txt).
We have documented how to set this up in robots.txt support.
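For example, a minimal robots.txt sketch that asks crawlers to skip a deprecated version (the version path is illustrative):
User-agent: *
# Hide a deprecated version from crawlers
Disallow: /en/1.0/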
How do I change the version slug of my project?
We don’t support changing the slug for your versions, but you can rename the branch or tag to achieve this. If that isn’t enough, you can request the change by sending an email to support@readthedocs.org.
What commit of Read the Docs is in production?
We deploy readthedocs.org from the rel
branch in our GitHub repository.
You can see the latest commits that have been deployed by looking on GitHub: https://github.com/readthedocs/readthedocs.org/commits/rel
We also keep an up-to-date changelog.
Additional features and configuration
How do I add additional software dependencies for my documentation?
For most Python dependencies, you can specify a requirements file which details your dependencies. You can also set your project documentation to install your Python project itself as a dependency.
See also
- Build process overview
An overview of the build process.
- How to create reproducible builds
General information about adding dependencies and best-practices for maintaining them.
- Build process customization
How to customize your builds, for example if you need to build with different tools from Sphinx or if you need to add additional packages for the Ubuntu-based builder.
- Configuration file reference
Reference for the main configuration file,
readthedocs.yaml
- build.apt_packages
Reference for adding Debian packages with apt for the Ubuntu-based builders
- Other FAQ entries
How do I change behavior when building with Read the Docs?
When Read the Docs builds your project, it sets the READTHEDOCS
environment
variable to the string 'True'
. So within your Sphinx conf.py
file, you
can vary the behavior based on this. For example:
import os
on_rtd = os.environ.get("READTHEDOCS") == "True"
if on_rtd:
html_theme = "default"
else:
html_theme = "nature"
The READTHEDOCS
variable is also available in the Sphinx build
environment, and will be set to True
when building on Read the Docs:
{% if READTHEDOCS %}
Woo
{% endif %}
I want comments in my docs
Read the Docs doesn’t have explicit support for this. That said, a tool like Disqus (and the sphinxcontrib-disqus plugin) can be used for this purpose on Read the Docs.
Can I remove advertising from my documentation?
Yes. See Opting out of advertising.
How do I change my project slug (the URL your docs are served at)?
We don’t support changing the slug for a project. You can update the name which is shown on the site, but not the actual URL that the documentation is served at.
The main reason for this is that all existing URLs to the content will break. You can delete and re-create the project with the proper name to get a new slug, but you really shouldn’t do this if you have existing inbound links, as it breaks the internet.
If that isn’t enough, you can request the change by sending an email to support@readthedocs.org.
Big projects
How do I host multiple projects on one custom domain?
We support the concept of subprojects, which allows multiple projects to share a single domain. If you add a subproject to a project, that documentation will be served under the parent project’s subdomain or custom domain.
For example,
Kombu is a subproject of Celery,
so you can access it on the celery.readthedocs.io
domain:
https://celery.readthedocs.io/projects/kombu/en/latest/
This also works the same for custom domains:
http://docs.celeryq.dev/projects/kombu/en/latest/
You can add subprojects in the project admin dashboard.
For details on custom domains, see our documentation on Custom domains.
How do I support multiple languages of documentation?
Read the Docs supports multiple languages. See the section on Localization and Internationalization.
Sphinx
I want to use the Read the Docs theme
To use the Read the Docs theme, you have to specify it in your Sphinx conf.py file.
Read the sphinx-rtd-theme documentation for instructions to enable it in your Sphinx project.
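As a short sketch, after installing the theme as a dependency, enabling it is typically a single setting in conf.py:
# conf.py
html_theme = "sphinx_rtd_theme"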
Image scaling doesn’t work in my documentation
Image scaling in docutils
depends on Pillow
.
If you notice that image scaling is not working properly on your Sphinx project,
you may need to add Pillow
to your requirements to fix this issue.
Read more about How to create reproducible builds to define your dependencies in a requirements.txt
file.
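For example, a requirements.txt sketch that adds Pillow alongside your documentation tool (pin versions as needed):
sphinx
Pillow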
Python
Can I document a Python package that is not at the root of my repository?
Yes. The most convenient way to access a Python package in your documentation, for example via Sphinx’s autoapi, is to use the python.install.method: pip (python.install) configuration key.
This configuration will tell Read the Docs to install your package in the virtual environment used to build your documentation, so your documentation tool can access it.
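A minimal sketch of the relevant .readthedocs.yaml fragment, assuming your package lives at the repository root (adjust path if your setup.py or pyproject.toml sits in a subdirectory):
python:
  install:
    - method: pip
      path: .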
Does Read the Docs work well with “legible” docstrings?
Yes. One criticism of Sphinx is that its annotated docstrings are too
dense and difficult for humans to read. In response, many projects
have adopted customized docstring styles that are simultaneously
informative and legible. The
NumPy
and
Google
styles are two popular docstring formats. Fortunately, the default
Read the Docs theme handles both formats just fine, provided
your conf.py
specifies an appropriate Sphinx extension that
knows how to convert your customized docstrings. Two such extensions
are numpydoc and
napoleon. Only
napoleon
is able to handle both docstring formats. Its default
output more closely matches the format of standard Sphinx annotations,
and as a result, it tends to look a bit better with the default theme.
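For example, enabling napoleon is typically a one-line addition to your conf.py (numpydoc is enabled the same way with its own extension name):
# conf.py
extensions = [
    "sphinx.ext.napoleon",
]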
Note
To use these extensions you need to specify the dependencies of your project by following this guide.
I need to install a package in an environment with pinned versions
If you’d like to pin your dependencies outside the package, you can add this line to your requirements or environment file (if you are using Conda).
In your requirements.txt
file:
# path to the directory containing setup.py relative to the project root
-e .
In your Conda environment file (environment.yml
):
# path to the directory containing setup.py relative to the environment file
-e ..
Other documentation frameworks
How can I deploy Jupyter Book projects on Read the Docs?
According to its own documentation,
Jupyter Book is an open source project for building beautiful, publication-quality books and documents from computational material.
Even though Jupyter Book leverages Sphinx “for almost everything that it does”, it purposely hides Sphinx conf.py files from the user, and instead generates them on the fly from its declarative _config.yml.
As a result, you need to follow some extra steps
to make Jupyter Book work on Read the Docs.
As described in the official documentation, you can manually convert your Jupyter Book project to Sphinx with the following configuration:
build:
jobs:
pre_build:
# Generate the Sphinx configuration for this Jupyter Book so it builds.
- "jupyter-book config sphinx docs/"
Changelog
Version 10.24.0
- Date:
April 16, 2024
@hoyes: Dev: Allow Minio to be used without debug mode (#11272)
@ericholscher: Release 10.23.2 (#11269)
@agjohnson: Add error view for error handling and error view testing (#11263)
@humitos: Build: remove
append_conf
_magic_ from MkDocs (#11206)
Version 10.23.2
- Date:
April 09, 2024
@github-actions[bot]: Dependencies: all packages updated via pip-tools (#11267)
@stsewd: Redirects: fix root redirect (/ -> <anything>) (#11265)
@github-actions[bot]: Dependencies: all packages updated via pip-tools (#11251)
@humitos: Build: mark build as CANCELLED when command exits with 183 (#11240)
@stsewd: Organizations: take into account the user when listing members (#11212)
Version 10.23.1
- Date:
March 26, 2024
@humitos: Build: mark build as CANCELLED when command exits with 183 (#11240)
@github-actions[bot]: Dependencies: all packages updated via pip-tools (#11237)
@humitos: APIv3: add
state__in
filter for Notifications (#11234)@ericholscher: Fully roll out stickybox (#11230)
@ericholscher: Release 10.23.0 (#11229)
@humitos: Proxito: define dummy dashboard URLs for addons serializers (#11227)
@stsewd: Organizations: take into account the user when listing members (#11212)
Version 10.23.0
- Date:
March 19, 2024
@github-actions[bot]: Dependencies: all packages updated via pip-tools (#11224)
@agjohnson: Fix bugs with support form (#11222)
@zliang-akamai: Fix Read the Docs config file name in notifications (#11221)
@humitos: Build: always reset the build before building (#11213)
@agjohnson: Add build detail view beta notification (#11208)
@humitos: Addons: allow users to define
root_selector
from the WebUI (#11181)@humitos: Addons: sorting algorithm for versions customizable on flyout (#11069)
Version 10.22.0
- Date:
March 12, 2024
@agjohnson: Add build detail view beta notification (#11208)
@github-actions[bot]: Dependencies: all packages updated via pip-tools (#11203)
@humitos: Revert “Notifications: show “Maxium concurrency limit reached” as
WARNING
” (#11202)@humitos: Notifications: de-duplicate them when using APIv2 from builders (#11197)
@humitos: Notifications: show “Maxium concurrency limit reached” as
WARNING
(#11196)@agjohnson: Allow setting Allauth provider secrets from host system (#11194)
@humitos: Support: create a form to render it nicely in ext-theme (#11193)
@humitos: Notification: fix
choices
rendering forINVALID_CHOICE
(#11190)@ericholscher: Release 10.21.0 (#11185)
@stsewd: Project: force PR previews to match repo only if the repo is public (#11184)
@humitos: Addons: allow users to define
root_selector
from the WebUI (#11181)@ericholscher: Init path to ensure it exists (#11178)
@stsewd: Project: build both default and latest version when saving the project form (#11177)
@humitos: Build: show the YAML config file before validating it (#11175)
@stsewd: Allow override SOCIALACCOUNT_PROVIDERS from ops (#11165)
@humitos: Lint: run
black
against all our Python files (#11145)@humitos: Addons: sorting algorithm for versions customizable on flyout (#11069)
Version 10.21.0
- Date:
March 04, 2024
@stsewd: Project: force PR previews to match repo only if the repo is public (#11184)
@github-actions[bot]: Dependencies: all packages updated via pip-tools (#11180)
@ericholscher: Init path to ensure it exists (#11178)
@stsewd: Project: build both default and latest version when saving the project form (#11177)
@humitos: Build: show the YAML config file before validating it (#11175)
@humitos: Notification: make the OAuth one dismissable (#11172)
@humitos: Build: set CANCELLED state when the build is cancelled (#11171)
@humitos: Admin: remove temporal opt-out email settings (#11164)
@humitos: New dashboard: notification to point users there (#11161)
@stsewd: Allauth: Include Bitbucket in the list of social accounts (#11160)
@hoyes: Dev: Default RTD_DJANGO_DEBUG to False if not set (#11154)
@humitos: Build: bugfix to show build notifications (#11153)
@ewjoachim: Fix Poetry instructions (#11152)
@humitos: VCS: deprecation dates at application level (#11147)
@humitos: Notifications: allow dismiss user’s notifications (#11130)
@stsewd: Match login template with changes from .com (#11101)
@humitos: Addons + Proxito: return
X-RTD-Resolver-Filename
and inject via CF (#11100)
Version 10.20.0
- Date:
February 27, 2024
@humitos: APIv3: add
_links.notifications
toProject
resource (#11155)@hoyes: Dev: Default RTD_DJANGO_DEBUG to False if not set (#11154)
@humitos: Build: bugfix to show build notifications (#11153)
@ewjoachim: Fix Poetry instructions (#11152)
@github-actions[bot]: Dependencies: all packages updated via pip-tools (#11148)
@humitos: VCS: deprecation dates at application level (#11147)
@taylorhummon: fix highlighting of “fail_on_warning: true” in tutorial (#11144)
@ericholscher: Refactor the index page to match the sidebar (#11141)
@ericholscher: Refactor documentation navigation (#11139)
@agjohnson: Add missing context variable (#11135)
@humitos: Notifications: allow dismiss user’s notifications (#11130)
@humitos: Addons: add model history on AddonsConfig (#11127)
@humitos: Addons + Proxito: return
X-RTD-Resolver-Filename
and inject via CF (#11100)@arti-bol: Added a troubleshooting section for webhook (#11099)
Version 10.19.0
- Date:
February 20, 2024
@humitos: Addons: add model history on AddonsConfig (#11127)
@github-actions[bot]: Dependencies: all packages updated via pip-tools (#11122)
@humitos: Notifications: show them based on permissions (#11117)
@saadmk11: API V3: Only return notifications for a given organization (#11112)
@humitos: Docs: build documentation with social cards (#11109)
@github-actions[bot]: Dependencies: all packages updated via pip-tools (#11108)
@humitos: Build: check for pre-compiled
build.tools
when usingubuntu-lts-latest
(#11098)@agjohnson: Use form validation errors for important UI feedback (#11095)
@agjohnson: Some fixes for notifications (#11094)
@dependabot[bot]: Bump peter-evans/create-pull-request from 5 to 6 (#11092)
@stsewd: Integrations: Don’t allow webhooks without a secret (#11083)
@humitos: Development: use
wrangler
locally (update NGINX/Dockerfile config) (#10965)
Version 10.18.0
- Date:
February 06, 2024
@dependabot[bot]: Bump peter-evans/create-pull-request from 5 to 6 (#11092)
@man-chi: add example list for showing basic asciidoc using Antora (#11091)
@github-actions[bot]: Dependencies: all packages updated via pip-tools (#11090)
@stsewd: Use html_format instead of mark_safe + format (#11086)
@stsewd: Build: use version slug for get_version_slug (#11085)
@stsewd: Integrations: Don’t allow webhooks without a secret (#11083)
@stsewd: Config file: add support for latest aliases (#11081)
@stsewd: Docs: clarify search configuration patterns (#11076)
@humitos: Make Sphinx to share environment between commands (#11073)
@ericholscher: Fix provier_name in notification template (#11066)
@humitos: Build: don’t attach notification when build failed
before_start
(#11057)@humitos: Notification: create an index for
attached_to
(#11050)@ericholscher: Release 10.15.1 (#11034)
@github-actions[bot]: Dependencies: all packages updated via pip-tools (#10860)
@humitos: Deprecation: remove code for config file v1 and default config file (#10367)
@benjaoming: Docs: Re-scope Intersphinx article as a how-to (#9622)
Version 10.17.0
- Date:
January 30, 2024
@humitos: Make Sphinx to share environment between commands (#11073)
@github-actions[bot]: Dependencies: all packages updated via pip-tools (#11070)
@stsewd: Integrations: add created and updated fields to model (#11067)
@ericholscher: Fix provier_name in notification template (#11066)
@stsewd: Analytics: don’t record page views for PR previews (#11065)
@stsewd: Custom domain: don’t allow external domain (#11064)
@humitos: Notifications: improve copy on error messages (#11062)
@stsewd: Embed API: fix regex patterns for allowed external domains (#11059)
@stsewd: Redirects: check if path is None and fix merge of query params (#11058)
@humitos: Build: don’t attach notification when build failed
before_start
(#11057)@stsewd: Docs: move warning from embed API to the top (#11053)
@humitos: APIv3: bring back
OrganizationsViewSet
that was removed (#11052)@humitos: Notification: create an index for
attached_to
(#11050)@humitos: Notification: cancel notifications automatically (#11048)
Version 10.16.1
- Date:
January 23, 2024
Version 10.16.0
- Date:
January 23, 2024
@github-actions[bot]: Dependencies: all packages updated via pip-tools (#11046)
@stsewd: Expose
assert_path_is_inside_docroot
function (#11045)@humitos: Config: allow missing
conda.environment
when usingbuild.commands
(#11040)@ericholscher: Release 10.15.1 (#11034)
@humitos: Addons: update form to show all the options (#11031)
@humitos: Config: better validation error for
conda.environment
(#10979)
Version 10.15.1
- Date:
January 16, 2024
@github-actions[bot]: Dependencies: all packages updated via pip-tools (#11029)
@humitos: Build: reset notifications when reseting a build (#11027)
@humitos: Development: define
SUPPORT_EMAIL
setting (#11026)@humitos: Notifications: use Template’s Django engine to render them (#11024)
@humitos: Notifications: render
Organization
notifications on details page (#11023)@humitos: Notifications: save
format_values
whenon_retry
exception (#11020)@humitos: Notifications: initialize exception properly (#11019)
@humitos: Notifications: use
instance.slug
instead ofinstance.name
(#11018)@humitos: Black: run black over all the code base (Part 2) (#11013)
@humitos: Notifications: small fixes found after reviewer (#10996)
@humitos: Config: better validation error for
conda.environment
(#10979)
Version 10.15.0
- Date:
January 09, 2024
@github-actions[bot]: Dependencies: all packages updated via pip-tools (#11005)
@ericholscher: Fix structlog by downgrading it (#11003)
@webknjaz: Fix ref to the “new addons integrations” blog post @ custom build doc (#10997)
@humitos: Notifications: small fixes found after reviewer (#10996)
@humitos: Remove leftovers from
django-messages-extends
(#10994)@stsewd: Integrations: hardcode deprecation date for incoming webhooks without a secret (#10993)
@stsewd: Development: update steps for testing subscriptions (#10992)
@stsewd: Redirects: remove null option from position field (#10991)
@ericholscher: Release 10.14.0 (#10989)
@humitos: Addons: get translation from main project (#10952)
@stsewd: Custom domains: don’t allow adding a custom domain on subprojects (#8953)
Version 10.14.0
- Date:
January 03, 2024
@github-actions[bot]: Dependencies: all packages updated via pip-tools (#10977)
@basnijholt: Fix YAML indentation in example
readthedocs.yaml
(#10970)@github-actions[bot]: Dependencies: all packages updated via pip-tools (#10969)
@agjohnson: Allow override of env settings from host (#10959)
@humitos: Addons: get translation from main project (#10952)
@dependabot[bot]: Bump actions/setup-python from 4 to 5 (#10950)
@stsewd: Search: fix default for search.ranking when indexing (#10945)
@ericholscher: Release 10.12.2 (#10944)
Version 10.13.0
- Date:
December 19, 2023
@agjohnson: Allow override of env settings from host (#10959)
@github-actions[bot]: Dependencies: all packages updated via pip-tools (#10957)
@github-actions[bot]: Dependencies: all packages updated via pip-tools (#10949)
@agjohnson: Allow empty project list on organization team form (#10947)
@ericholscher: Release 10.12.2 (#10944)
@ericholscher: Add AddonsConfig admin (#10938)
Version 10.12.2
- Date:
December 05, 2023
Version 10.12.1
- Date:
November 28, 2023
Version 10.12.0
- Date:
November 28, 2023
@kurtmckee: Fix a typo in the word ‘Sphinx’ (#10926)
@github-actions[bot]: Dependencies: all packages updated via pip-tools (#10925)
@humitos: Feature flag: remove
CDN_ENABLED
which is not used anymore (#10921)@humitos: Design: small update to add a PATCH endpoint (#10919)
@github-actions[bot]: Dependencies: all packages updated via pip-tools (#10909)
@stsewd: Integrations: always show secret and show warning if secret is not set (#10908)
@stsewd: Integrations: better error message for integrations without a secret (#10903)
@ericholscher: Release 10.11.0 (#10900)
@ericholscher: Mention custom sitemap via robots.txt (#10899)
@stsewd: Project: use a choicefield for selecting the versioning scheme (#10845)
@nikblanchet: Docs: Configuration file how-to guide (#10301)
@humitos: Build: expose VCS-related environment variables (#10168)
Version 10.11.0
- Date:
November 14, 2023
Version 10.10.0
- Date:
November 07, 2023
@github-actions[bot]: Dependencies: all packages updated via pip-tools (#10889)
@ericholscher: Make Ubuntu callout correct (#10883)
@ericholscher: Release 10.9.0 (#10880)
@stsewd: Resolver: use new methods to resolve documentation pages (#10875)
@humitos: Resolver: don’t use one global instance and implement caching (#10872)
@agjohnson: Add organization view UI filters (#10847)
@stsewd: Redirects (design doc): improving existing functionality (#10825)
Version 10.9.0
- Date:
October 31, 2023
@github-actions[bot]: Dependencies: all packages updated via pip-tools (#10876)
@stsewd: Resolver: use new methods to resolve documentation pages (#10875)
@humitos: Addons: improve DB query for
projects_feature
table (#10871)@humitos: NGINX: inject the proper
readthedocs-version-slug
(#10870)@stsewd: Unresolver: remove old language code compatibility (#10869)
@stsewd: Config file: remove deprecated keys from json schema (#10867)
@humitos: DB: create an index for
builds_build
table to improve Addons API (#10840)@stsewd: Redirects (design doc): improving existing functionality (#10825)
@humitos: Addons: accept
project-slug
andversion-slug
on endpoint (#10823)@stephenfin: docs: Document how to fetch additional branches (#10795)
Version 10.8.1
- Date:
October 24, 2023
Version 10.8.0
- Date:
October 24, 2023
@github-actions[bot]: Dependencies: all packages updated via pip-tools (#10860)
@mathbunnyru: Docs: fix formatting of commented configuration example (#10858)
@stsewd: Docs: update docs about default dependencies (#10851)
@stsewd: Revert “Build (python): default 3 to 3.11” (#10846)
@stsewd: Build: install compatible version of virtualenv in images (#10844)
@ericholscher: Keep Ad Customization in the docs (#10843)
@humitos: DB: create an index for
builds_build
table to improve Addons API (#10840)@stsewd: Build: don’t pre-install pip and setuptools in images (#10834)
@humitos: Addons: expand db query to get the
type
as well (#10829)@ericholscher: Release 10.7.1 (#10827)
@humitos: Addons: accept
project-slug
andversion-slug
on endpoint (#10823)@ericholscher: Clarify admin permission (#10822)
@humitos: Addons: resolve versions/translations URLs properly (#10821)
@stsewd: Proxito: normalize code languages and redirect to them (#10750)
@humitos: Deprecation: remove code for config file v1 and default config file (#10367)
@nikblanchet: Docs: Configuration file how-to guide (#10301)
Version 10.7.1
- Date:
October 17, 2023
@github-actions[bot]: Dependencies: all packages updated via pip-tools (#10819)
@humitos: Logs: always log documentation size in megabytes (#10812)
@humitos: Docs: deprecated note about flyout integration/customization (#10810)
@A5rocks: Add Python 3.12 to allowed Python versions (#10808)
@github-actions[bot]: Dependencies: all packages updated via pip-tools (#10807)
@stsewd: Proxito: always use subdomain for serving docs (#10799)
@ericholscher: Release 10.6.1 (#10792)
@ericholscher: Merge Diataxis into
main
! (#10034)@humitos: structlog: migrate application code to better logging (#8705)
Version 10.7.0
- Date:
October 10, 2023
@github-actions[bot]: Dependencies: all packages updated via pip-tools (#10807)
@stsewd: Webhooks: use PUBLIC_API_URL to generate the webhook URL (#10801)
@ericholscher: Release 10.6.1 (#10792)
@stsewd: Proxito: remove one query from the middleware (#10788)
@stsewd: BuildAPIKey: remove old revoked/expired keys (#10778)
@humitos: VCS: remove unused methods and make new Git pattern the default (#8968)
Version 10.6.1
- Date:
October 03, 2023
@github-actions[bot]: Dependencies: all packages updated via pip-tools (#10784)
@stsewd: BuildAPIKey: remove old revoked/expired keys (#10778)
@stsewd: SSO: update Google Workspace integration docs (#10774)
@humitos: Docs: update example for AsciiDoc to simplify it a little (#10763)
@humitos: Build: remove support for MkDocs <= 0.17.3 (#10584)
@humitos: Deprecation: remove support for Sphinx 1.x (#10365)
@stsewd: SearchQuery: use BigAutoField for primary key (#9671)
Version 10.6.0
- Date:
September 26, 2023
@github-actions[bot]: Dependencies: all packages updated via pip-tools (#10768)
@humitos: Docs: update example for AsciiDoc to simplify it a little (#10763)
@stsewd: Proxyto: Allow CORS on commercial on public docs pages (#10762)
@humitos: APIv3: return
single_version
field onProject
resource (#10758)@stsewd: Build indexing: fix indexing of external versions (#10756)
@humitos: Addons: move the HTTP header to a GET parameter (#10753)
@ericholscher: Release 10.5.0 (#10749)
@humitos: Addons: get final project (e.g.
subproject
) (#10745)@github-actions[bot]: Dependencies: all packages updated via pip-tools (#10744)
@ericholscher: Change frequency of pageview delete code (#10739)
@humitos: Proxito: add CORS headers only on public versions (#10737)
@humitos: Addons: allow users to opt-in into the beta addons (#10733)
@agjohnson: Custom domain doc improvements (#10719)
@humitos: API: return the
api_version
on the response (#10276)
Version 10.5.0
- Date:
September 18, 2023
@github-actions[bot]: Dependencies: all packages updated via pip-tools (#10744)
@ericholscher: Change frequency of pageview delete code (#10739)
@humitos: Proxito: add CORS headers only on public versions (#10737)
@humitos: Addons: allow users to opt-in into the beta addons (#10733)
@humitos: Docs: review and update the whole FAQ page. (#10732)
@humitos: Docs: make
sphinx.configuration
in the tutorial (#10718)@humitos: Requirements: revert upgrade to
psycopg==3.x
(#10713)@humitos: FooterAPI: use
AdminPermission
when working with object users (#10709)@stsewd: Search: stop relying on the DB when indexing (#10696)
Version 10.4.0
- Date:
September 12, 2023
@dependabot[bot]: Bump actions/checkout from 3 to 4 (#10724)
@github-actions[bot]: Dependencies: all packages updated via pip-tools (#10723)
@humitos: Requirements: revert upgrade to
psycopg==3.x
(#10713)@atugushev: Fix blog post URL (#10712)
@humitos: Addons: allow to be extended by corporate (#10705)
@humitos: Addons: add
CDN-Tags
to endpoint and auto-purge cache (#10704)@stsewd: Footer API: include current user in queryset (#10695)
@agjohnson: Black pass number 2 (#10693)
Version 10.3.0
- Date:
September 05, 2023
@humitos: Addons: allow to be extended by corporate (#10705)
@humitos: Addons: add
CDN-Tags
to endpoint and auto-purge cache (#10704)@github-actions[bot]: Dependencies: all packages updated via pip-tools (#10700)
@stsewd: Footer API: include current user in queryset (#10695)
@agjohnson: Black pass number 2 (#10693)
@stsewd: Delete imported files when deactivating version (#10684)
@humitos: Addons: prepare the backend for the new flyout (#10650)
@humitos: Deprecation: remove “use system packages” (
python.system_packages
config key and UI checkbox) (#10562)@agjohnson: Add beta version of doc diff library for testing (#9546)
Version 10.2.0
- Date:
August 29, 2023
@humitos: Docs: update
conda
config key to mentionbuild.tools.python
(#10672)@github-actions[bot]: Dependencies: all packages updated via pip-tools (#10669)
@stsewd: CI: Add requirements/testing.txt to pre-commit cache key (#10658)
@stsewd: Set
SECURE_PROXY_SSL_HEADER
when using docker compose (#10657)@humitos: Tests: Update Sphinx test matrix for EmbedAPI (#10655)
@ecormany: docs: typo fix on “Custom and built-in redirects” page (#10651)
@humitos: Addons: prepare the backend for the new flyout (#10650)
@github-actions[bot]: Dependencies: all packages updated via pip-tools (#10649)
@humitos: Build: do not set
sphinx_rtd_theme
theme automatically (#10638)@github-actions[bot]: Dependencies: all packages updated via pip-tools (#10628)
@humitos: Deprecation: remove “use system packages” (
python.system_packages
config key and UI checkbox) (#10562)@humitos: Feature flag: remove UPDATE_CONDA_STARTUP (#10494)
@saadmk11: Stop creating a conf.py automatically and doing magic around README handling (#5609)
Version 10.1.0
- Date:
August 22, 2023
@ecormany: docs: typo fix on “Custom and built-in redirects” page (#10651)
@humitos: Build: drop websupport2 support from conf.py template (#10646)
@ericholscher: Remove the celery email tasks until we can debug them. (#10641)
@humitos: Development: disable cached Loader on
DEBUG=True
(#10640)@humitos: Docs: update tutorial with the latest required changes (#10639)
@humitos: Build: do not set
sphinx_rtd_theme
theme automatically (#10638)@stsewd: Proxito: allow to generate proxied API URLs with a prefix (#10634)
@ericholscher: Small wording cleanup on Integration howto (#10632)
@github-actions[bot]: Dependencies: all packages updated via pip-tools (#10628)
@humitos: Revert “Contact projects with a build in the last 3 years” (#10618)
@stsewd: Versions: keep type of version in sync with the project (#10606)
@humitos: Import: remove extra/advanced step from project import wizard (#10603)
@benjaoming: Docs: Methodology section (#10417)
@humitos: VCS: remove unused methods and make new Git pattern the default (#8968)
Version 10.0.0
This release is a Django 4.2 upgrade, so it has a major version bump, 10.0!
- Date:
August 14, 2023
@ericholscher: Update deprecation timezone to use PDT (#10631)
@stsewd: Custom domain: increase header value length (#10625)
@ericholscher: Use same HomepageView for Community & Business (#10621)
@stsewd: Revert “Proxito: test new implementation more broadly (#10599)” (#10614)
@humitos: Deprecation: codify browndates for “no config file deprecation” (#10612)
@humitos: Testing: run Coverage report only on CircleCI (#10611)
@humitos: Profile: redirect to
/accounts/edit/
view on successful edit (#10610)@stsewd: Admin: show creation/modification dates on the admin page (#10607)
@stsewd: Versions: keep type of version in sync with the project (#10606)
@stsewd: Proxito: test new implementation more broadly (#10599)
@stsewd: Build: replace GitPython with git commands (#10594)
@agjohnson: Add organization listing filter (#10593)
@humitos: Deprecation: notification and feature flag for
build.image
config (#10589)@stsewd: Subscriptions: use djstripe for products/features (#10238)
Version 9.16.4
- Date:
August 08, 2023
@stsewd: Proxito: test new implementation more broadly (#10599)
@agjohnson: Add organization listing filter (#10593)
@agjohnson: Add USE_ORGANIZATIONS context variablea (#10592)
@ericholscher: Release 9.16.3 (#10590)
@agjohnson: Update support page (#10580)
@humitos: Search: delete
sphinx_domains
Django app completely (#10574)@ericholscher: Add redirect to
about.readthedocs.com
for logged out users (#10570)@humitos: API: analytics return 400 when there is an error (#10240)
Version 9.16.3
- Date:
August 01, 2023
@github-actions[bot]: Dependencies: all packages updated via pip-tools (#10582)
@ericholscher: Uninstall sphinx_domains app to it’s models aren’t registered (#10578)
@ericholscher: Clarify forced redirects (#10577)
@humitos: Build tools: run
asdf version
from inside the container (#10575)@humitos: Build: add
mambaforge-22.09
as newer Python tool (#10572)@humitos: Development: install Docker and Docker Compose with official guides (#10561)
@humitos: Build: use
only-if-needed
pip’s strategy when installing package (#10560)@humitos: Docs: mention how to use
inv docker.compilebuildtool
(#10554)@humitos: Build: fail builds if there is no
index.html
in the output dir (#10550)@humitos: Telemetry: check for Sphinx config before use it (#10546)
@agjohnson: Fix bug with build filter (#10528)
@humitos: Version warning banner: disable it for project not using it already (#10483)
Version 9.16.2
- Date:
July 25, 2023
@humitos: Development: install Docker and Docker Compose with official guides (#10561)
@humitos: Build: use
only-if-needed
pip’s strategy when installing package (#10560)@github-actions[bot]: Dependencies: all packages updated via pip-tools (#10557)
@humitos: Build: use a setting to define the Docker image for the clone step (#10555)
@humitos: Docs: mention how to use
inv docker.compilebuildtool
(#10554)@humitos: API: add
?full_name=
icontains filter on RemoteRepository (#10551)@humitos: Telemetry: check for Sphinx config before use it (#10546)
@denisSurkov: Docs: Fix pinned term (#10545)
@humitos: Development: update docs to pull required images only (#10535)
@agjohnson: Add missing Version.external_version_name (#10529)
@agjohnson: Fix bug with build filter (#10528)
Version 9.16.1
- Date:
July 17, 2023
@github-actions[bot]: Dependencies: all packages updated via pip-tools (#10542)
@humitos: Development: update docs to pull required images only (#10535)
@humitos: Addons: return
ethicalads
data on/_/addons/
endpoint (#10534)@humitos: Celery: handle known exceptions on
delete_closed_external_versions
(#10532)@agjohnson: Add conditional logic to replace project version list view (#10530)
@agjohnson: Docs: swap around content for configuration files (#10517)
@humitos: Build: install all the latest Python “core requirements” (#10508)
@stsewd: Build API key: trim name to max allowed length (#10487)
@humitos: Deprecation: show the project slug/link correctly on email (#10432)
Version 9.16.0
- Date:
July 11, 2023
@github-actions[bot]: Dependencies: all packages updated via pip-tools (#10521)
@stsewd: Docs (dev): update server side search integration doc (#10518)
@stsewd: Search: use generic parser for MkDocs projects (#10516)
@humitos: MkDocs: fix
USE_MKDOCS_LATEST
feature flag logic (#10515)@humitos: Builds: set scale-in protection before/after each build (#10507)
@github-actions[bot]: Dependencies: all packages updated via pip-tools (#10503)
@ericholscher: Reduce logging of common redirects and expected items (#10497)
@benjaoming: Test: Verify “cat .readthedocs.yaml” was called (#10495)
@humitos: Docs: update Conda to its latest available version (#10493)
@benjaoming: Tests: Mock revoking build API key (#10491)
@stephenfin: docs: Correct typo (#10489)
@stsewd: Build API key: don’t fetch and validate key twice (#10488)
@stsewd: Build API key: trim name to max allowed length (#10487)
@humitos: Docs: use
$READTHEDOCS_OUTPUT
variable in examples (#10486)@humitos: Version warning banner: disable it for project not using it already (#10483)
@benjaoming: Docs: Update example Sphinx .readthedocs.yaml (#10481)
@benjaoming: Images: Add tzdata as explicit requirement (#10480)
@benjaoming: CI: Use a cache for pre-commit (#10479)
@ericholscher: Release 9.15.0 (#10475)
@MSanKeys963: Docs: Typo fix for integrations.rst (#10474)
@humitos: Notification: expand management command to follow conventions (#10470)
@stsewd: API V2: Optimize /project/active_versions and /version/ endpoints (#10460)
@davidfischer: Update gold docs to reflect cross-site cookie reality (#10459)
@humitos: Addons: improve “active and built Versions” query (#10455)
@humitos: DB: do not fetch
data
and others when deleting rows (#10446)@benjaoming: Docs: Add “Git provider account connection” feature description (#10442)
@benjaoming: Dashboard: Update docs link (#10441)
@humitos: Deprecation: show the project slug/link correctly on email (#10432)
@benjaoming: Build: Simplify and optimize git backend: New clone+fetch pattern (#10430)
@humitos: Addons: handle API exceptions from unresolver (#10427)
@stsewd: Use project-scoped temporal tokens to interact with the API from the builders (#10378)
@EwoutH: Update patch versions and add new ones for all supported languages (#10217)
@humitos: Docs: mention
docsify
on “Build customization” (#9439)
Version 9.15.0
- Date:
June 26, 2023
@MSanKeys963: Docs: Typo fix for integrations.rst (#10474)
@humitos: Addons: improve db query when adding HTTP header from El Proxito (#10461)
@stsewd: API V2: Optimize /project/active_versions and /version/ endpoints (#10460)
@benjaoming: Docs: Replace navigation instructions with direct URLs w/ organization chooser (#10457)
@humitos: Addons: improve “active and built Versions” query (#10455)
@stsewd: API V3: add IsAuthenticated to permissions (#10452)
@stsewd: Search: stop creating SphinxDomain objects (#10451)
@stsewd: Unresolver: check for valid schemes when unresolving URL (#10450)
@stsewd: Proxito: easy migration to custom path prefixes (#10448)
@humitos: Addons: handle API exceptions from unresolver (#10427)
@humitos: Celery: increase frequency of
delete_closed_external_versions
task (#10425)@stsewd: Use project-scoped temporal tokens to interact with the API from the builders (#10378)
@EwoutH: Update patch versions and add new ones for all supported languages (#10217)
@humitos: Docs: mention
docsify
on “Build customization” (#9439)@davidfischer: Flyout and Footer API design document (#8052)
Version 9.14.0
- Date:
June 20, 2023
@stsewd: Test with explicit number of concurrent builds (#10444)
@benjaoming: Do not show paths in 404s (#10443)
@humitos: Deprecation: opt-out from config file email (#10440)
@humitos: Deprecation: send emails to “active projects” only (#10439)
@benjaoming: Docs: Add email template to report abandoned projects (#10435)
@rffontenelle: Update instructions for using transifex client tool (#10434)
@stsewd: CI: trigger circleci job on readthedocs-ext on merge (#10433)
@humitos: Deprecation: show the project slug/link correctly on email (#10432)
@ericholscher: Add the api_client into the sync_repo task (#10431)
@humitos: Analytics: create DB index on PageView.date (#10426)
@humitos: Celery: increase frequency of delete_closed_external_versions task (#10425)
@benjaoming: Docs: Configuration file pages as explanation and reference (Diátaxis) (#10416)
@ericholscher: Deprecation: send email notifications for config file v2 (#10415)
@ericholscher: Add a cat command and note in the build output when a config file is properly used. (#10413)
@humitos: Build: fail builds without configuration file or using v1 (#10355)
@stsewd: Design doc: secure access to APIs from builders (#10289)
Version 9.13.3
- Date:
June 13, 2023
@humitos: GitHub Action: remove team-reviewers because it requires a GH-PAT (#10421)
@ericholscher: Deprecation: send email notifications for config file v2 (#10415)
@humitos: Deprecation: improve Celery task db query (#10414)
@benjaoming: Docs: Add an “explanation index” (#10412)
@benjaoming: Docs: Correct title case for SEO occurrences (#10409)
@benjaoming: Docs: Add $READTHEDOCS_OUTPUT to environment variable reference (#10407)
@benjaoming: Bump sphinx-rtd-theme to 1.2.2 (#10400)
@agjohnson: Fix display issues with project creation config page (#10398)
@benjaoming: Docs: Split email notifications and webhook notifications into separate howtos (#10396)
@agjohnson: Fixes on Git providers (#10395)
@stsewd: Sphinx: don’t override html_context by default (#10394)
@benjaoming: Project: Add deprecation and removal warning to Advanced Settings (#10393)
@stsewd: Build: pass api_client down to environment/builders/etc (#10390)
@benjaoming: Docs: Add some messages flagging the upcoming requirement of a .readthedocs.yaml (#10389)
@benjaoming: Dev: invoke options --no-django-debug and --http-domain (#10384)
@benjaoming: Docs: Define ‘maintainer’ so we can reference it (#10381)
@benjaoming: Build: Bug in target_url, failure to add “success” status if no external version exists (#10369)
@humitos: Project: suggest a simple config file on project import wizard (#10356)
@humitos: Config: deprecated notification for projects without config file (#10354)
@nikblanchet: Docs: Configuration file how-to guide (#10301)
Version 9.13.2
- Date:
June 06, 2023
@agjohnson: Try to bump up config file search in ranking (#10387)
@benjaoming: Dev: invoke options --no-django-debug and --http-domain (#10384)
@benjaoming: Doc: Remove broken reference (#10382)
@github-actions[bot]: Dependencies: all packages updated via pip-tools (#10380)
@stsewd: Logs: remove caching without tags log warning (#10376)
@stsewd: Build: merge BaseEnvironment with BuildEnvironment (#10375)
@stsewd: Build: avoid breaking builds when a new argument is added to a task (#10374)
@benjaoming: Build: Bug in target_url, failure to add “success” status if no external version exists (#10369)
@ericholscher: Release 9.13.1 (#10366)
@benjaoming: Small index page tweak (#10358)
@humitos: Project: suggest a simple config file on project import wizard (#10356)
@humitos: Config: deprecated notification for projects without config file (#10354)
Version 9.13.1
- Date:
May 30, 2023
@github-actions[bot]: Dependencies: all packages updated via pip-tools (#10362)
@benjaoming: Small index page tweak (#10358)
@humitos: Email: trust GH and GL emails and mark them as verified (#10357)
@humitos: Docs: note explaining build.apt_packages doesn’t work with build.commands (#10347)
@humitos: Requirements: upgrade DDT to avoid an issue (#10340)
@benjaoming: Bump sphinx-rtd-theme to 1.2.1 (#10338)
@humitos: Build: allow multi-line commands on build.commands (#10334)
@github-actions[bot]: Dependencies: all packages updated via pip-tools (#10330)
@stsewd: Organizations: allow users without organizations to see their own profiles (#10329)
@benjaoming: Organizations: Organization chooser page (#10325)
@benjaoming: Proxito: Search scope narrowed to active project (version, translation or subproject 404s) (#10324)
@stsewd: Proxito: redirect to default version from root language (#10313)
@stsewd: API V3: clean version when deactivated and build version when activated (#10308)
@stsewd: Builds: avoid breaking builds when adding a new field to our APIs (#10295)
@benjaoming: Docs: Update “How to import private repositories” (Diátaxis) (#10251)
@benjaoming: Docs: Relabel howto guides for Git repository configuration (Diátaxis) (#10247)
Version 9.13.0
- Date:
May 23, 2023
@humitos: Build: allow multi-line commands on build.commands (#10334)
@stsewd: Organizations: allow users without organizations to see their own profiles (#10329)
@benjaoming: Proxito: Search scope narrowed to active project (version, translation or subproject 404s) (#10324)
@stsewd: API V3: clean version when deactivated and build version when activated (#10308)
@agjohnson: Change a few configuration file options from required to not required (#10303)
@github-actions[bot]: Dependencies: all packages updated via pip-tools (#10298)
@stsewd: Build: use same version of setuptools when using system_packages (#10287)
@ericholscher: Release 9.12.0 (#10284)
@benjaoming: Allow build.commands without build.tools (#10281)
Version 9.12.0
- Date:
May 02, 2023
@benjaoming: Allow build.commands without build.tools (#10281)
@benjaoming: Remove raise_for_exception=False tests (#10280)
@github-actions[bot]: Dependencies: all packages updated via pip-tools (#10278)
@benjaoming: Dev: Disable cacheops in proxito docker environment (#10274)
@stsewd: Tests: be explicit about the privacy level (#10273)
@github-actions[bot]: Dependencies: all packages updated via pip-tools (#10267)
@benjaoming: Backend: Make Features ordered in a nice way (#10262)
@stsewd: Proxito: allow overlapping public and external version domains (#10260)
@ericholscher: Revert “Proxito: inject hosting integration header for build.commands (#10219)” (#10259)
@ericholscher: Release 9.11.0 (#10255)
@benjaoming: Docs: Style guide stash (#10250)
@benjaoming: Docs: New entries to glossary (#10249)
@stsewd: Proxito: handle http to https redirects for all requests (#10199)
@ericholscher: Fix checking of PR status (#10085)
@ewdurbin: implement multiple .readthedocs.yml files per repo (#10001)
@benjaoming: Contextualize 404 page (#9657)
Version 9.11.0
- Date:
April 18, 2023
@benjaoming: Fix a little test failure (#10248)
@benjaoming: Scripts: Add export statements and instruction to fetch awscli (compile_version_upload_s3.sh) (#10245)
@github-actions[bot]: Dependencies: all packages updated via pip-tools (#10244)
@ericholscher: Release 9.10.1 (#10235)
@dependabot[bot]: Bump peter-evans/create-pull-request from 4 to 5 (#10233)
@github-actions[bot]: Dependencies: all packages updated via pip-tools (#10232)
@agjohnson: Add notes on private repo support in our install docs (#10230)
@stsewd: Analytics API: check if absolute_uri isn’t present (#10227)
@humitos: Proxito: inject hosting integration header for build.commands (#10219)
@humitos: API: hosting integrations endpoint versioning/structure (#10216)
@benjaoming: Search: index <dl>s as sections and remove Sphinx domain logic (#10128)
@ewdurbin: implement multiple .readthedocs.yml files per repo (#10001)
Version 9.10.1
- Date:
April 11, 2023
@dependabot[bot]: Bump peter-evans/create-pull-request from 4 to 5 (#10233)
@github-actions[bot]: Dependencies: all packages updated via pip-tools (#10232)
@agjohnson: Add notes on private repo support in our install docs (#10230)
@stsewd: Analytics API: check if absolute_uri isn’t present (#10227)
@humitos: Docs: minor changes to examples for consistency (#10225)
@benjaoming: Docs: Experiment with canonical url using READTHEDOCS_CANONICAL_URL (#10224)
@github-actions[bot]: Dependencies: all packages updated via pip-tools (#10215)
@stsewd: Proxito: Test infinite redirect on non-existing PDFs (#10212)
@stsewd: API V3: support privacy levels on projects and versions (#10210)
@agjohnson: Fix filter positional arguments (#10202)
@benjaoming: Docs: Gather definitions in the same dl on main index page (#10201)
@humitos: Build: hardcode the Docker username for now (#10172)
@humitos: Build: expose VCS-related environment variables (#10168)
@agjohnson: Automation rules: model text changes for UI (#10138)
@stsewd: Unify feature check for organization/project (#8920)
Version 9.10.0
- Date:
March 28, 2023
@humitos: Javascript client: search-as-you-type API response (#10196)
@github-actions[bot]: Dependencies: all packages updated via pip-tools (#10192)
@agjohnson: Filters: several bug fixes and some filter tuning (#10191)
@agjohnson: Make our homepage conditional on the new dashboard (#10189)
@benjaoming: Docs: Changes to main index page (Diátaxis) (#10186)
@stsewd: Proxito: allow serving files under the projects dir (#10180)
@stsewd: Redirects: test redirects with projects prefix (#10179)
@benjaoming: Docs: Removal of implicit Intersphinx reference labels to MyST-based documentation (#10176)
@agjohnson: Replace nvm/asdf with native CircleCI node installation (#10174)
@humitos: Build: hardcode the Docker username for now (#10172)
@ericholscher: Release 9.9.1 (#10169)
@humitos: Build: expose VCS-related environment variables (#10168)
@humitos: Build: export READTHEDOCS_CANONICAL_URL variable (#10166)
@humitos: Project: only support Git as VCS for new projects (#10114)
Version 9.9.1
- Date:
March 21, 2023
@humitos: Build: use safe_open for security reasons (#10165)
@github-actions[bot]: Dependencies: all packages updated via pip-tools (#10163)
@agjohnson: Update some docs for the new dashboard templates (#10161)
@ericholscher: Revert 92a7182af42e26cab01265d2cc06fc7832832689 (#10158)
@humitos: Lint: update common to get the latest linting changes (#10154)
@stsewd: Proxito: don’t check for index.html if the path already ends with /. (#10153)
@ericholscher: Docs: Disable PDF builds for now (#10152)
@stsewd: Put back template_name on proxito 404 view (#10149)
@silopolis: Fix doc_builder exceptions messages typos and spelling (#10147)
@stsewd: Proxito: redirect http->https for public domains (#10142)
@benjaoming: Removing non-used requirements file lint.in (#10140)
@humitos: Build: pass PATH environment variable to Docker container (#10133)
@benjaoming: Docs: New how-to sublevels (Diátaxis) (#10131)
@humitos: Hosting: manual integrations via build contract (#10127)
@benjaoming: Docs: emojis in TOC captions, FontAwesome on external links in TOC (Diátaxis) (#10039)
Version 9.9.0
- Date:
March 14, 2023
@github-actions[bot]: Dependencies: all packages updated via pip-tools (#10139)
@ericholscher: Fix typo (#10130)
@humitos: Lint: one step forward through linting our code (#10129)
@humitos: Build: check for _build/html directory and fail if exists (#10126)
@stsewd: Proxito: actually cache robots.txt and sitemap.xml (#10123)
@humitos: Build: pass shell commands directly (build.jobs/build.commands) (#10119)
@humitos: Downloadable artifacts: make PDF and ePub opt-in by default (#10115)
@humitos: Build: fail PDF command (latexmk) if exit code != 0 (#10113)
@humitos: pre-commit: move prospector inside pre-commit (#10105)
@agjohnson: Add beta version of doc diff library for testing (#9546)
Version 9.8.0
- Date:
March 07, 2023
@humitos: Downloadable artifacts: make PDF and ePub opt-in by default (#10115)
@humitos: Development: allow to define the logging level via an env variable (#10109)
@humitos: Celery: cheat job_status view to return finished after 5 polls (#10107)
@github-actions[bot]: Dependencies: all packages updated via pip-tools (#10104)
@stsewd: Canonical redirects: check if the project supports custom domains (#10098)
@benjaoming: Docs: Move a reference and remove an empty parenthesis (#10093)
@benjaoming: Docs: Update documentation for search.ignore (#10091)
@benjaoming: Fix intersphinx references to myst-parser (updated in myst-parser 0.19) (#10090)
@humitos: Analytics: add Plausible to our dashboard (#10087)
@ericholscher: Add X-Content-Type-Options as a custom domain header (#10062)
@stsewd: Proxito: adapt unresolver to make it usable for proxito (#10037)
@agjohnson: Add beta version of doc diff library for testing (#9546)
@davidfischer: Support the new Google analytics gtag.js (#7691)
Version 9.7.0
This release contains one security fix. For more information, see:
- Date:
February 28, 2023
@humitos: Celery: delete Telemetry data that’s at most 3 months older (#10079)
@humitos: Celery: consider only PageView from the last 3 months (#10078)
@humitos: Celery: limit archive_builds_task query to the last 90 days (#10077)
@humitos: Proxito: use django-cacheops to cache some querysets (#10075)
@github-actions[bot]: Dependencies: all packages updated via pip-tools (#10072)
@ericholscher: Docs: Add opengraph (#10066)
@ericholscher: Subscriptions: Set organization name in Stripe (#10064)
@benjaoming: Support delisting of projects (#10060)
@benjaoming: Docs: Fix undeclared labels after refactor + fix root causes (#10059)
@benjaoming: Docs: Replace duplicate information about staff and contributors with a seealso:: (#10056)
@benjaoming: Docs: Use “Sentence case” for titles (#10055)
@ericholscher: Make fancy new build failed email (#10054)
@humitos: Revert “Requirements: unpin newrelic (#10041)” (#10052)
@humitos: Build: log usage of old output directory _build/html (#10038)
@benjaoming: Pin django-filter (#2499)
Version 9.6.0
- Date:
February 21, 2023
@github-actions[bot]: Dependencies: all packages updated via pip-tools (#10045)
@benjaoming: Docs: emojis in TOC captions, FontAwesome on external links in TOC (Diátaxis) (#10039)
@ericholscher: Merge Diataxis into main! (#10034)
@ericholscher: Docs: Upgrade Sphinx & sphinx_rtd_theme (#10033)
@stsewd: Proxito: use unresolved domain on page redirect view (#10032)
@ericholscher: Docs: Refactor Reproducible Builds page (Diátaxis) (#10030)
@stsewd: Proxito: make use of project from unresolved_domain in some views (#10029)
@ericholscher: Docs: Refactor the build & build customization pages (Diátaxis) (#10028)
@stsewd: Proxito: move “canonicalizing” logic to docs view (#10027)
@benjaoming: Docs: Navigation reorder (Diátaxis) (#10026)
@humitos: Embed API: Glossary terms sharing description (Sphinx) (#10024)
@humitos: Builds: ignore cancelling the build at “Uploading” state (#10006)
@humitos: Build: expose READTHEDOCS_VIRTUALENV_PATH variable (#9971)
Version 9.5.0
This release contains one security fix. For more information, see:
- Date:
February 13, 2023
@agjohnson: Bump to latest common (#10019)
@github-actions[bot]: Dependencies: all packages updated via pip-tools (#10014)
@benjaoming: Docs: Very small text update (#10012)
@sondalex: Fix code block indentation in Jupyter user guide (#10008)
@benjaoming: Docs: Refactor all business features into feature reference + change “privacy level” page (Diátaxis) (#10007)
@benjaoming: Docs: Relabel SEO guide as explanation (Diátaxis) (#10004)
@stsewd: Use new maintained django-cors-headers package (#10000)
@agjohnson: Fix ordering of filter for most recently built project (#9992)
@benjaoming: Docs: Refactor security logs as reference (Diátaxis) (#9985)
@humitos: Settings: simplify all the settings removing a whole old layer (dev) (#9978)
@humitos: Build: expose READTHEDOCS_VIRTUALENV_PATH variable (#9971)
@benjaoming: Docs: Refactor “Environment variables” into 3 articles (Diátaxis) (#9966)
@benjaoming: Docs: Split “Automation rules” into reference and how-to (Diátaxis) (#9953)
@stsewd: Subscriptions: use getattr for getting related organization (#9932)
@ericholscher: Allow searching & filtering VersionAutomationRuleAdmin (#9917)
@humitos: Build: use environment variable $READTHEDOCS_OUTPUT to define output directory (#9913)
Version 9.4.0
This release contains one security fix. For more information, see:
- Date:
February 07, 2023
@agjohnson: Fix ordering of filter for most recently built project (#9992)
@github-actions[bot]: Dependencies: all packages updated via pip-tools (#9987)
@humitos: Docs: remove outdated and complex code and dependencies (#9981)
@humitos: Settings: simplify all the settings removing a whole old layer (dev) (#9978)
@humitos: Development: use gunicorn for web and proxito (#9977)
@stsewd: Subscriptions: match stripe customer description with org name (#9976)
@humitos: Build: expose READTHEDOCS_VIRTUALENV_PATH variable (#9971)
@benjaoming: Docs: Remove html_theme_path from conf.py (#9923)
@benjaoming: Docs: Relabel Automatic Redirects as “Incoming links: Best practices and redirects” (Diátaxis) (#9896)
@mwtoews: Docs: add warning that pull requests only build HTML and not other formats (#9892)
@ericholscher: Fix status reporting on PRs with the magic exit code (#9807)
@benjaoming: Do not assign html_theme_path (#9654)
@davidfischer: Switch to universal analytics (#3495)
Version 9.3.1
- Date:
January 30, 2023
@ericholscher: Add documentation page on Commercial subscriptions (#9963)
@humitos: MkDocs builder: use proper relative path for --site-dir (#9962)
@github-actions[bot]: Dependencies: all packages updated via pip-tools (#9960)
@humitos: Build: rclone retries when uploading artifacts (#9954)
@benjaoming: Docs: Relabel badges as feature reference (Diátaxis) (#9951)
@benjaoming: Docs: Make the GSOC page orphaned (Diátaxis) (#9949)
@agjohnson: Translations: a few copy issues and translator requests (#9937)
@humitos: Logging: log slugs when at least one of their builds was finished (#9928)
@benjaoming: Docs: Relabel pages to new top-level “Reference/Policies and legal documents” (Diátaxis) (#9916)
@benjaoming: Docs: Move Main Features and Feature Flags to “Reference/Features” (Diátaxis) (#9915)
@benjaoming: Docs: Add new section “How-to / Troubleshooting” and move 2 existing troubleshooting pages (#9914)
@stsewd: CORS: don’t allow to pass credentials by default (#9904)
@benjaoming: CI: Add option --show-diff-on-failure to pre-commit (#9893)
@stsewd: Build storage: add additional checks for the source dir (#9890)
@humitos: Git backend: make default_branch point to VCS’ default branch (#9424)
@ericholscher: Make Build models default to triggered (#8031)
Version 9.3.0
- Date:
January 24, 2023
@github-actions[bot]: Dependencies: all packages updated via pip-tools (#9925)
@benjaoming: Docs: FAQ title/question tweak (#9919)
@benjaoming: Docs: Move and update FAQ (Diátaxis) (#9908)
@ericholscher: Release 9.2.0 (#9905)
@stsewd: CORS: don’t allow to pass credentials by default (#9904)
@abe-101: rm mention of docs/requirements.txt from tutorial (#9902)
@github-actions[bot]: Dependencies: all packages updated via pip-tools (#9898)
@benjaoming: Docs: Relabel Server Side Search (#9897)
@humitos: Build: standardize output directory for artifacts (#9888)
@humitos: Command contact_owners: add support to filter by usernames (#9882)
@benjaoming: Park resolutions to common build problems in FAQ (#9472)
Version 9.2.0
This release contains two security fixes. For more information, see our GitHub advisories:
- Date:
January 16, 2023
@github-actions[bot]: Dependencies: all packages updated via pip-tools (#9898)
@benjaoming: UI updates to Connected Accounts (#9891)
@agjohnson: Replace DPA text with link to our presigned DPA (#9883)
@sethfischer: Docs: correct Python console block type (#9880)
@sethfischer: Docs: update build customization Poetry example (#9879)
@humitos: EmbedAPI: decode filepath before open them from S3 storage (#9860)
@benjaoming: Docs: Additions to style guide - placeholders, seealso::, Diátaxis and new word list entry (#9840)
@benjaoming: Docs: Relabel and move explanation and how-tos around OAuth (Diátaxis) (#9834)
@benjaoming: Docs: Split Custom Domains as Explanation and How-to Guide (Diátaxis) (#9676)
Version 9.1.3
- Date:
January 10, 2023
@github-actions[bot]: Dependencies: all packages updated via pip-tools (#9872)
@benjaoming: Move reference labels outside of tabs (#9866)
@humitos: EmbedAPI: decode filepath before open them from S3 storage (#9860)
@github-actions[bot]: Dependencies: all packages updated via pip-tools (#9853)
@ericholscher: Remove intercom from our DPA list (#9846)
@agjohnson: API: add project name/slug filters (#9843)
@benjaoming: Docs: Relabel Organizations as Explanation (Diátaxis) (#9836)
@ericholscher: Docs: Add subset of tests to testing docs (#9817)
@ericholscher: Docs: Refactor downloadable docs (#9768)
Version 9.1.2
- Date:
January 03, 2023
@github-actions[bot]: Dependencies: all packages updated via pip-tools (#9845)
@agjohnson: Update common submodule (#9841)
@benjaoming: Docs: Relabel Organizations as Explanation (Diátaxis) (#9836)
@benjaoming: Docs: Relabel “Single version documentation” documentation from feature to explanation (Diátaxis) (#9835)
@benjaoming: Docs: Relabel the “Science” page as Explanation (#9832)
@humitos: Build details page: normalize/trim command paths (second attempt) (#9831)
@benjaoming: Label for subproject select renamed “Child” => “Subproject” + help text added (#9829)
@stsewd: API V2: test that command is actually saved (#9827)
@benjaoming: Removes fetching of main branch (#9826)
@humitos: Test: path is trimmed when returned by the API (#9824)
@humitos: Dependencies: use backtracking pip’s resolver (#9821)
@benjaoming: Docs: Split Subprojects in Explanation and How-to (Diátaxis) (#9785)
@benjaoming: Docs: Split Traffic Analytics to a How-to guide and a Feature entry (Diátaxis) (#9677)
Version 9.1.1
- Date:
December 20, 2022
@humitos: Dependencies: use backtracking pip’s resolver (#9821)
@benjaoming: Use sphinx-rtd-theme 1.2.0rc1 (#9818)
@ericholscher: Add subset of tests to testing docs (#9817)
@humitos: Build details page: normalize/trim command paths (#9815)
@ericholscher: Break documentation style guide out into its own file (#9813)
@ericholscher: Disable Sphinx mimetype errors on epub (#9812)
@ericholscher: Docs: Update security log wording (#9811)
@benjaoming: Docs: Fix build 3 warnings (#9809)
@benjaoming: Fix silent, then loud failure after Tox 4 upgrade (#9803)
@ericholscher: Docs: Split SSO docs into HowTo/Explanation (Diátaxis) (#9801)
@juantocamidokura: Docs: Remove outdated and misleading Poetry guide (#9794)
@benjaoming: CI builds: Checkout main branch in a robust way (#9793)
@ericholscher: Release 9.1.0 (#9792)
@benjaoming: Docs: Relabel Localization as Explanation (Diátaxis) (#9790)
@benjaoming: Fix Circle CI builds: Tox 4 compatibility, add external commands to allowlist (#9789)
@benjaoming: Do not build documentation in Circle CI, Read the Docs handles that :100: (#9788)
@benjaoming: Docs: Move “Choosing between our two platforms” to Explanation (Diátaxis) (#9784)
@benjaoming: Docs: Change “downloadable” to “offline” (#9782)
@benjaoming: Adds missing translation strings (#9770)
@benjaoming: Docs: Split up Pull Request Builds into a how-to guide and reference (Diátaxis) (#9679)
@benjaoming: Docs: Split Custom Domains as Explanation and How-to Guide (Diátaxis) (#9676)
@benjaoming: Docs: Split and relabel VCS integration as explanation and how-to (Diátaxis) (#9675)
Version 9.1.0
This release contains an important security fix. See more information on the GitHub advisory.
- Date:
December 08, 2022
@benjaoming: Docs: Move “Choosing between our two platforms” to Explanation (Diátaxis) (#9784)
@benjaoming: Abandoned Projects policy: Relax reachability requirement (#9783)
@benjaoming: Docs: Change “downloadable” to “offline” (#9782)
@github-actions[bot]: Dependencies: all packages updated via pip-tools (#9775)
@humitos: Settings: define default MailerLite setting (#9769)
@ericholscher: Refactor downloadable docs (#9768)
Version 9.0.0
This version upgrades our Search API to v3.
- Date:
November 28, 2022
@Jean-Maupas: A few text updates (#9761)
@github-actions[bot]: Dependencies: all packages updated via pip-tools (#9760)
@benjaoming: Docs: 4 diátaxis categories at the top of the navigation sidebar (Diátaxis iteration 0) (#9758)
@ericholscher: Be more explicit about where to go in VCS instructions (#9757)
@benjaoming: Docs: Adding a pattern for reusing “Only on Read the Docs for Business” admonition (Diátaxis refactor) (#9754)
@stsewd: Subscriptions: attach stripe subscription to organizations (#9751)
@stsewd: Search: fix parsing of parameters inside sphinx domains (#9750)
@eltociear: Fix typo in private.py (#9744)
@browniebroke: Docs: update instructions to install deps with Poetry (#9743)
@github-actions[bot]: Dependencies: all packages updated via pip-tools (#9742)
@humitos: Docs: cancel PR builds if there is no documentation changes (#9734)
@humitos: Docs: add an example for custom domain input (#9733)
@ericholscher: Add an initial policy for delisting unmaintained projects (#9731)
@humitos: Docs: poetry example on build.jobs section (#9445)
Version 8.9.0
- Date:
November 15, 2022
@github-actions[bot]: Dependencies: all packages updated via pip-tools (#9728)
@ericholscher: Release 8.8.1 (#9724)
@stsewd: Proxito: don’t depend on attributes injected in the request (#9711)
@stsewd: Unresolver: support external versions for single version projects (#9709)
@humitos: Build: skip build based on commands’ exit codes (#9649)
@ericholscher: Change mailing list subscription to when the user validates their email (#9384)
Version 8.8.1
This release contains a security fix, which is the most important part of the update.
- Date:
November 09, 2022
Security fix: https://github.com/readthedocs/readthedocs.org/security/advisories/GHSA-98pf-gfh3-x3mp
@stsewd: Unresolver: support external versions for single version projects (#9709)
Version 8.8.0
- Date:
November 08, 2022
@github-actions[bot]: Dependencies: all packages updated via pip-tools (#9714)
@humitos: Build: bump readthedocs-sphinx-ext to <2.3 (#9707)
@benjaoming: Bump to sphinx-rtd-theme to 1.1.0 (#9701)
@humitos: GHA: only run the preview links action on docs/ path (#9696)
@humitos: Telemetry: not collect Sphinx data if there is no conf.py (#9695)
@stsewd: Subscriptions: don’t remove stripe id on canceled subscriptions (#9693)
@ericholscher: Release 8.7.1 (#9691)
@github-actions[bot]: Dependencies: all packages updated via pip-tools (#9688)
@benjaoming: Docs: Split up Build Notifications into feature/reference and how-to (Diátaxis) (#9686)
@dojutsu-user: Run blacken-docs precommit hook on all files (#9672)
@benjaoming: Proposal for sphinxcontrib-jquery (#9665)
@stsewd: Subscriptions: use djstripe events to mail owners (#9661)
@benjaoming: Docs: Use current year instead of hard-coded 2010 (#9660)
Version 8.7.1
- Date:
October 24, 2022
@benjaoming: Docs: Comment out the science contact form (#9674)
@github-actions[bot]: Dependencies: all packages updated via pip-tools (#9663)
@benjaoming: Docs: Use current year instead of hard-coded 2010 (#9660)
@benjaoming: Adds more basic info to the default 404 page (#9656)
@humitos: Settings: enable django-debug-toolbar when Django Admin is enabled (#9641)
@humitos: Telemetry: track Sphinx extensions and html_theme variables (#9639)
@evildmp: Docs: Made some small changes to the MyST migration how-to (#9620)
@dojutsu-user: Add admin functions for wiping a version (#5140)
Version 8.7.0
- Date:
October 11, 2022
@github-actions[bot]: Dependencies: all packages updated via pip-tools (#9648)
@humitos: Settings: enable django-debug-toolbar when Django Admin is enabled (#9641)
@stsewd: Subscriptions: use stripe price instead of relying on plan object (#9640)
@github-actions[bot]: Dependencies: all packages updated via pip-tools (#9636)
@ericholscher: Release 8.6.0 (#9630)
@benjaoming: Docs: Re-scope Intersphinx article as a how-to (#9622)
@evildmp: Made some small changes to the MyST migration how-to (#9620)
@stsewd: Email: render template before sending it to the task (#9538)
Version 8.6.0
- Date:
September 28, 2022
@github-actions[bot]: Dependencies: all packages updated via pip-tools (#9621)
@evildmp: Made some small changes to the MyST migration how-to (#9620)
@boahc077: ci: add minimum GitHub at the workflow level for pip-tools.yaml (#9617)
@sashashura: GitHub Workflows security hardening (#9609)
@uvidyadharan: Update intersphinx.rst (#9601)
@ericholscher: Release 8.5.0 (#9600)
@github-actions[bot]: Dependencies: all packages updated via pip-tools (#9596)
@stsewd: Unresolver: strict validation for external versions and other fixes (#9534)
Version 8.5.0
- Date:
September 12, 2022
@github-actions[bot]: Dependencies: all packages updated via pip-tools (#9596)
@humitos: OAuth: add logging for imported GitHub RemoteRepository (#9590)
@github-actions[bot]: Dependencies: all packages updated via pip-tools (#9583)
@stsewd: Invitations: delete related invitations when deleting an object (#9582)
Version 8.4.3
- Date:
September 06, 2022
@github-actions[bot]: Dependencies: all packages updated via pip-tools (#9583)
@stsewd: Invitations: delete related invitations when deleting an object (#9582)
@stsewd: Use utility function domReady instead of JQuery’s .ready (#9579)
@humitos: Logging: log time spent to upload build artifacts (#9568)
@humitos: Docs: recommend using pip instead of setuptools (#9567)
@stsewd: Embed API: strip leading / before joining path (#9565)
@ericholscher: Release 8.4.2 (#9558)
@ericholscher: Proxito redirects: pass full_path instead of re-creating it. (#9557)
@stsewd: Subscriptions: use stripe subscriptions to show details (#9550)
@benjaoming: Docs: HTML form for getting in touch with Read the Docs for Science (#9543)
@stsewd: Use djstripe models for organization subscriptions (#9486)
@stsewd: Ask for confirmation when adding a user to a project/organization/team (#9440)
@stsewd: Security logs: delete old user security logs (#8620)
Version 8.4.2
- Date:
August 29, 2022
@ericholscher: Proxito redirects: pass full_path instead of re-creating it. (#9557)
@benjaoming: Docs: HTML form for getting in touch with Read the Docs for Science (#9543)
@humitos: Dependencies: pin django-structlog to 2.2.1 (#9535)
@stsewd: Embedded js: remove more dependency on jquery (#9515)
@stsewd: Embedded js: remove some dependency from jquery (#9508)
@stsewd: Use djstripe models for organization subscriptions (#9486)
@benjaoming: Park resolutions to common build problems in FAQ (#9472)
Version 8.4.1
- Date:
August 23, 2022
@humitos: Dependencies: pin django-structlog to 2.2.1 (#9535)
@dependabot[bot]: Bump actions/setup-python from 3 to 4 (#9529)
@github-actions[bot]: Dependencies: all packages updated via pip-tools (#9528)
@stsewd: Teams: don’t send email notification when users adds themselves to a team (#9511)
@benjaoming: Removes rstcheck (#9505)
@benjaoming: Docs: sphinxcontrib-video was added incorrectly (#9501)
@agjohnson: Fix typo in build concurrency logging (#9499)
@humitos: Dependencies: use pip-tools for all our files (#9480)
@humitos: Dependencies: use GitHub Action + pip-tools (#9479)
@stsewd: Proxito: separate project slug extraction from request manipulation (#9462)
@stsewd: Ask for confirmation when adding a user to a project/organization/team (#9440)
Version 8.4.0
- Date:
August 16, 2022
@benjaoming: Docs: sphinxcontrib-video was added incorrectly (#9501)
@agjohnson: Fix typo in build concurrency logging (#9499)
@stsewd: Custom urlconf: support serving static files (#9496)
@humitos: Build: unpin Pillow for unsupported Python versions (#9473)
@benjaoming: Docs: Read the Docs for Science - new alternative with sphinx-design (#9460)
@stsewd: Ask for confirmation when adding a user to a project/organization/team (#9440)
Version 8.3.7
- Date:
August 09, 2022
@humitos: Build: unpin Pillow for unsupported Python versions (#9473)
@stsewd: Redirects: check only for hostname and path for infinite redirects (#9463)
@benjaoming: Fix missing indentation on reStructuredText badge code (#9404)
@stsewd: Embed JS: fix incompatibilities with sphinx 6.x (jquery removal) (#9359)
Version 8.3.6
- Date:
August 02, 2022
@stsewd: Build: use correct build environment for build.commands (#9454)
@benjaoming: Docs: Fixes warnings and other noisy build messages (#9453)
@ericholscher: Release 8.3.5 (#9452)
@humitos: GitHub Action: add link to Pull Request preview (#9450)
@humitos: OAuth: add logging for GitHub RemoteRepository (#9449)
@benjaoming: Docs: Adds Jupyter Book to examples table (#9446)
@humitos: Docs: poetry example on build.jobs section (#9445)
Version 8.3.5
- Date:
July 25, 2022
@humitos: GitHub Action: add link to Pull Request preview (#9450)
@humitos: OAuth: add logging for GitHub RemoteRepository (#9449)
@benjaoming: Docs: Adds Jupyter Book to examples table (#9446)
@humitos: Docs: poetry example on build.jobs section (#9445)
@agjohnson: Update env var docs (#9443)
@ericholscher: Update dev domain to devthedocs.org (#9442)
@humitos: Docs: mention docsify on “Build customization” (#9439)
Version 8.3.4
- Date:
July 19, 2022
Version 8.3.3
- Date:
July 12, 2022
@davidfischer: Stickybox ad fix (#9421)
@humitos: OAuth: unify the exception used for the user message (#9415)
@humitos: Docs: improve the flyout page to include a full example (#9413)
@humitos: OAuth: resync RemoteRepository weekly for active users (#9410)
@stsewd: Analytics: make sure there is only one record with version=None (#9408)
@agjohnson: Add frontend team codeowners rules (#9407)
@naveensrinivasan: chore: Included githubactions in the dependabot config (#9396)
@benjaoming: Docs: Add an examples section (#9371)
Version 8.3.2
- Date:
July 05, 2022
@neilnaveen: chore: Set permissions for GitHub actions (#9394)
@stsewd: Telemetry: skip listing conda packages on non-conda envs (#9390)
@ericholscher: UX: Improve DUPLICATED_RESERVED_VERSIONS error (#9383)
@ericholscher: Release 8.3.1 (#9379)
@ericholscher: Properly log build exceptions in Celery (#9375)
@humitos: Middleware: use regular HttpResponse and log the suspicious operation (#9366)
@ericholscher: Add an explicit flyout placement option (#9357)
@stsewd: PR previews: Warn users when enabling the feature on incompatible projects (#9291)
Version 8.3.1
- Date:
June 27, 2022
@ericholscher: Properly log build exceptions in Celery (#9375)
@humitos: Development: default value for environment variable (#9370)
@humitos: Middleware: use regular HttpResponse and log the suspicious operation (#9366)
@humitos: Development: remove silent and use long attribute name (#9363)
@ericholscher: Fix glossary ordering (#9362)
@benjaoming: Do not list feature overview twice (#9361)
@agjohnson: Release 8.3.0 (#9358)
@ericholscher: Add an explicit flyout placement option (#9357)
@humitos: Development: allow to pass --ngrok when starting up (#9353)
@humitos: Development: avoid path collision when running multiple builders (#9352)
@humitos: Security: avoid requests with NULL characters (0x00) on GET (#9350)
@humitos: Build: handle 422 response on send build status (#9347)
@benjaoming: Updates and fixes to Development Install guide (#9319)
@agjohnson: Add DMCA takedown request for project dicom-standard (#9311)
Version 8.3.0
- Date:
June 20, 2022
@humitos: Security: avoid requests with NULL characters (0x00) on GET (#9350)
@stsewd: Subscriptions: log subscription id when canceling (#9340)
@stsewd: Search: support section titles inside header tags (#9339)
@humitos: Local development: use nodemon to watch files instead of watchmedo (#9338)
@humitos: EmbedAPI: clean images (src) properly from inside a tooltip (#9337)
@stsewd: Gold: log if the subscription has more than one item (#9334)
@humitos: EmbedAPI: handle special case for Sphinx manual references (#9333)
@benjaoming: Add mc client to web container (#9331)
@humitos: Translations: migrate tx/config to new client’s version format (#9327)
@benjaoming: Docs: Improve scoping of two potentially overlapping Triage sections (#9302)
Version 8.2.0
- Date:
June 14, 2022
@ericholscher: Docs: Small edits to add a couple keywords and clarify headings (#9329)
@humitos: Translations: integrate Transifex into our Docker tasks (#9326)
@stsewd: Subscriptions: handle subscriptions with multiple products/plans/items (#9320)
@benjaoming: Update the team page (#9309)
@ericholscher: Release 8.1.2 (#9300)
@ericholscher: Fix Docs CI (#9299)
@agjohnson: Update mentions of our roadmap to be current (#9293)
@stsewd: lsremote: set max split when parsing remotes (#9292)
@humitos: Tests: make tests-embedapi require regular tests first (#9289)
@ericholscher: Truncate output that we log from commands to 10 lines (#9286)
Version 8.1.2
- Date:
June 06, 2022
@ericholscher: Fix Docs CI (#9299)
@agjohnson: Update mentions of our roadmap to be current (#9293)
@stsewd: lsremote: set max split when parsing remotes (#9292)
@humitos: Tests: make tests-embedapi require regular tests first (#9289)
@agjohnson: Update 8.1.1 changelog with hotfixes (#9288)
@stsewd: Cancel build: get build from the current project (#9287)
@saadmk11: Remote repository: Add user admin action for syncing remote repositories (#9272)
Version 8.1.1
- Date:
Jun 1, 2022
Version 8.1.0
- Date:
May 24, 2022
@humitos: Assets: update package-lock.json with newer versions (#9262)
@agjohnson: Improve contributing dev doc (#9260)
@agjohnson: Update translations, pull from Transifex (#9259)
@humitos: Build: solve problem with sanitized output (#9257)
@humitos: Docs: improve “Environment Variables” page (#9256)
@humitos: Docs: jsdoc example using build.jobs and build.tools (#9241)
@stsewd: Docker environment: check for None on stdout/stderr response (#9238)
@stsewd: Proxied static files: use its own storage class (#9237)
@ericholscher: Release 8.0.2 (#9234)
@humitos: Development: only pull the images required (#9182)
@stsewd: Proxito: serve static files from the same domain as the docs (#9168)
@humitos: Project: use RemoteRepository to define default_branch (#8988)
@humitos: Design doc: forward path to a future builder (#8190)
Version 8.0.2
- Date:
May 16, 2022
@agjohnson: Disable codecov annotations (#9186)
@choldgraf: Note sub-folders with a single domain. (#9185)
@stsewd: BuildCommand: add option to merge or not stderr with stdout (#9184)
@agjohnson: Fix bumpver issue (#9181)
@agjohnson: Release 8.0.1 (#9180)
@agjohnson: Spruce up docs on pull request builds (#9177)
@ericholscher: Fix RTD branding in the code (#9175)
@agjohnson: Fix copy issues on model fields (#9170)
@stsewd: Proxito: serve static files from the same domain as the docs (#9168)
@stsewd: User: delete organizations where the user is the last owner (#9164)
@ericholscher: Add a basic djstripe integration (#9087)
@stsewd: Custom domains: don’t allow adding a custom domain on subprojects (#8953)
Version 8.0.1
- Date:
May 09, 2022
@ericholscher: Fix RTD branding in the code (#9175)
@ericholscher: Remove our old out-dated architecture diagram (#9169)
@humitos: Docs: mention ubuntu-22.04 as a valid option (#9166)
@ericholscher: Initial test of adding plan to CDN (#9163)
@ericholscher: Fix links in docs from the build page refactor (#9162)
@ericholscher: Note build.jobs required other keys (#9160)
@ericholscher: Add docs showing pip-tools usage on dependencies (#9158)
@ericholscher: Experiment with pip-tools for our docs.txt requirements (#9124)
@ericholscher: Add a basic djstripe integration (#9087)
Version 8.0.0
- Date:
May 03, 2022
Note
We are upgrading to Ubuntu 22.04 LTS and also to Python 3.10.
Projects using Mamba with the old, now removed, CONDA_USES_MAMBA feature flag have to update their .readthedocs.yaml file to use build.tools.python: mambaforge-4.10 to continue using Mamba to create their environment.
See more about build.tools.python at https://docs.readthedocs.io/en/stable/config-file/v2.html#build-tools-python
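For projects affected by this change, the relevant part of .readthedocs.yaml would look roughly like the sketch below; the conda environment path is illustrative and should match your own repository layout:

    # .readthedocs.yaml (sketch)
    version: 2

    build:
      os: ubuntu-22.04
      tools:
        # Keeps using Mamba now that the CONDA_USES_MAMBA feature flag is removed
        python: mambaforge-4.10

    conda:
      # Path to your conda/mamba environment file (illustrative)
      environment: docs/environment.yml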
@humitos: Mamba: remove CONDA_USES_MAMBA feature flag (#9153)
@ericholscher: Remove prebuild step so docs keep working (#9143)
@ericholscher: Release 7.6.2 (#9140)
@humitos: Docs: feature documentation for build.jobs (#9138)
@OriolAbril: add note on setting locale_dirs (#8972)
Version 7.6.2
- Date:
April 25, 2022
@stsewd: Analytics: add feature flag to skip tracking 404s (#9131)
@humitos: External versions: save state (open / closed) (#9128)
@stsewd: git: respect SKIP_SYNC_* flags when using lsremote (#9125)
@agjohnson: Release 7.6.1 (#9123)
@pyup-bot: pyup: Scheduled weekly dependency update for week 16 (#9121)
@thomasrockhu-codecov: ci: add informational Codecov status checks (#9119)
@stsewd: Build: use gvisor for projects using build.jobs (#9114)
@humitos: Docs: call linkcheck Sphinx builder for our docs (#9091)
Version 7.6.1
- Date:
April 19, 2022
Version 7.6.0
- Date:
April 12, 2022
@stsewd: Celery: workaround fix for bug on retrying builds (#9096)
@ericholscher: Try to fix .com tests (#9092)
@humitos: Notification: don’t send it on build retry (#9086)
@humitos: Build: bugfix RepositoryError.CLONE_ERROR message (#9083)
@stsewd: Proxito: only check for index files if there is a version (#9079)
@stsewd: Adapt scripts and docs to make use of the new github personal tokens (#9078)
@ericholscher: Release 7.5.1 (#9074)
@pyup-bot: pyup: Scheduled weekly dependency update for week 14 (#9073)
@agjohnson: Add gVisor runtime option for build containers (#9066)
@humitos: Proxito: do not serve non-existent versions (#9048)
@humitos: SyncRepositoryTask: rate limit to 1 per minute per project (#9021)
@humitos: Build: implement build.jobs config file key (#9016)
Version 7.5.1
- Date:
April 04, 2022
@humitos: Build: use same hack for VCS and build environments (#9055)
@ericholscher: Fix jinja2 on embed tests (#9053)
@jsquyres: director.py: restore READTHEDOCS_VERSION_[TYPE|NAME] (#9052)
@ericholscher: Fix tests around jinja2 (#9050)
@humitos: Build: do not send VCS build status on specific exceptions (#9049)
@humitos: Proxito: do not serve non-existent versions (#9048)
@agjohnson: Release 7.5.0 (#9047)
@humitos: Build: Mercurial (hg) compatibility with old versions (#9042)
@eyllanesc: Fixes link (#9041)
@ericholscher: Fix jinja2 pinning on Sphinx 1.8 feature flagged projects (#9036)
@humitos: SyncRepositoryTask: rate limit to 1 per minute per project (#9021)
@humitos: Build: use same build environment for setup and build (#9018)
@humitos: Build: implement build.jobs config file key (#9016)
@abravalheri: Improve displayed version name when building from PR (#8237)
Version 7.5.0
- Date:
March 28, 2022
@humitos: Build: Mercurial (hg) compatibility with old versions (#9042)
@eyllanesc: Fixes link (#9041)
@ericholscher: Fix jinja2 pinning on Sphinx 1.8 feature flagged projects (#9036)
@agjohnson: Add bumpver configuration (#9029)
@davidfischer: Update the community ads application link (#9028)
@ericholscher: Don’t use master branch explicitly in requirements (#9025)
@humitos: GitHub OAuth: use bigger pages to make fewer requests (#9020)
@humitos: Build: use same build environment for setup and build (#9018)
@pyup-bot: pyup: Scheduled weekly dependency update for week 11 (#9012)
@humitos: Build: allow users to use Ubuntu 22.04 LTS (#9009)
@humitos: Build: proof of concept for pre/post build commands (build.jobs) (#9002)
Version 7.4.2
- Date:
March 14, 2022
@agjohnson: Release 7.4.1 (#9004)
@pyup-bot: pyup: Scheduled weekly dependency update for week 10 (#9003)
@humitos: API: validate RemoteRepository when creating a Project (#8983)
@dogukanteber: Use django-storages’ manifest files class instead of the overridden class (#8781)
@abravalheri: Improve displayed version name when building from PR (#8237)
Version 7.4.1
- Date:
March 07, 2022
@humitos: Requirements: remove django-permissions-policy (#8987)
@stsewd: Archive builds: avoid filtering by commands__isnull (#8986)
@humitos: API: validate RemoteRepository when creating a Project (#8983)
@humitos: Celery: trigger archive_builds frequently with a lower limit (#8981)
@pyup-bot: pyup: Scheduled weekly dependency update for week 09 (#8977)
@stsewd: MkDocs: allow None on extra_css/extra_javascript (#8976)
@stsewd: Docs: warn about custom domains on subprojects (#8945)
@dogukanteber: Use django-storages’ manifest files class instead of the overridden class (#8781)
@nienn: Docs: Add links to documentation on creating custom classes (#8466)
@stsewd: Integrations: allow to pass more data about external versions (#7692)
Version 7.4.0
- Date:
March 01, 2022
@humitos: Celery: increase timeout limit for sync_remote_repositories task (#8974)
@agjohnson: Fix a couple integration admin bugs (#8964)
@humitos: Build: allow NULL when updating the config (#8962)
@agjohnson: Release 7.3.0 (#8957)
@pyup-bot: pyup: Scheduled weekly dependency update for week 08 (#8954)
@humitos: Requirements: upgrade gitpython because of security issue (#8950)
@agjohnson: Pin storages with boto3 (#8947)
@humitos: Build: reset build error before start building (#8943)
@humitos: Django3: use new JSON fields instead of old TextFields (#8934)
@humitos: Build: ability to cancel a running build from dashboard (#8850)
Version 7.3.0
- Date:
February 21, 2022
@humitos: Requirements: upgrade gitpython because of security issue (#8950)
@agjohnson: Pin storages with boto3 (#8947)
@humitos: Build: reset build error before start building (#8943)
@humitos: Django3: use new JSON fields instead of old TextFields (#8934)
@agjohnson: Tune build config migration (#8931)
@humitos: Build: use ubuntu-20.04 image for setup VCS step (#8930)
@humitos: Sentry and Celery: do not log RepositoryError in Sentry (#8928)
@ericholscher: Add x-hoverxref-version to CORS (#8927)
@humitos: Deploy: avoid locking the table when adding new JSON field (#8926)
@pyup-bot: pyup: Scheduled weekly dependency update for week 07 (#8915)
Version 7.2.1
- Date:
February 15, 2022
@humitos: Build: do not send notifications on known failed builds (#8918)
@humitos: Celery: use on_retry to handle BuildMaxConcurrencyError (#8917)
@agjohnson: Throw an exception from Celery retry() (#8905)
@agjohnson: Reduce verbose logging on generic command failure (#8904)
@humitos: Build: allow to not record commands on sync_repository_task (#8899)
@stsewd: Support for CDN when privacy levels are enabled (#8896)
@ericholscher: Don’t be so excited always in our emails :) (#8888)
@humitos: Django3: delete old JSONField and use the new ones (#8869)
@humitos: Django3: add new django.db.models.JSONField (#8868)
Version 7.2.0
- Date:
February 08, 2022
@ericholscher: Don’t be so excited always in our emails :) (#8888)
@stsewd: CI: Don’t install debug tools when running tests (#8882)
@agjohnson: Fix issue with build task routing and config argument (#8877)
@humitos: Celery: use an internal namespace to store build task’s data (#8874)
@agjohnson: Release 7.1.2 (#8873)
@agjohnson: Release 7.1.1 (#8872)
@humitos: Task router: check new config build.tools.python for conda (#8855)
Version 7.1.2
- Date:
January 31, 2022
Version 7.1.1
- Date:
January 31, 2022
@humitos: Task router: check new config build.tools.python for conda (#8855)
@stsewd: AuditLog: always fill organization id & slug (#8846)
@humitos: Docs: remove beta warning from config file’s build key (#8843)
@agjohnson: Fix more casing issues (#8842)
@agjohnson: Update choosing a platform doc (#8837)
@pyup-bot: pyup: Scheduled weekly dependency update for week 04 (#8835)
Version 7.1.0
- Date:
January 25, 2022
@astrojuanlu: Detail what URLs are expected in issue template (#8832)
@humitos: Cleanup: delete unused Django management commands (#8830)
@simonw: Canonical can point as stable, not just latest (#8828)
@davidfischer: Use stickybox ad placement on RTD themed projects (#8823)
@ericholscher: Quiet the Unresolver logging (#8822)
@stsewd: Workaround for HttpExchange queries casting IDs as uuid/int wrongly (#8821)
@ericholscher: Release 7.0.0 (#8818)
@pyup-bot: pyup: Scheduled weekly dependency update for week 03 (#8817)
Version 7.0.0
This is our 7th major version! This is because we are upgrading to Django 3.2 LTS.
- Date:
January 17, 2022
@agjohnson: Release 6.3.3 (#8806)
@agjohnson: Fix linting issue on project private view (#8805)
@pyup-bot: pyup: Scheduled weekly dependency update for week 02 (#8804)
@astrojuanlu: Remove explicit username from tutorial (#8803)
@humitos: Bitbucket: update to match latest API changes (#8801)
@stsewd: API v3: check if the name generates a valid slug (#8791)
@astrojuanlu: Make commercial docs more visible (#8780)
@davidfischer: Make the analytics cookie a session cookie (#8694)
@ericholscher: Add ability to rebuild a specific build (#6995)
Version 6.3.3
- Date:
January 10, 2022
@pyup-bot: pyup: Scheduled weekly dependency update for week 02 (#8804)
@astrojuanlu: Remove explicit username from tutorial (#8803)
@humitos: Bitbucket: update to match latest API changes (#8801)
@ericholscher: Mention subproject aliases (#8785)
@humitos: Config file: system_site_packages overwritten from project’s setting (#8783)
@astrojuanlu: Make commercial docs more visible (#8780)
@humitos: Spam: allow to mark a project as (non)spam manually (#8779)
@davidfischer: Make the analytics cookie a session cookie (#8694)
Version 6.3.2
- Date:
January 04, 2022
@cagatay-y: Fix broken link in edit-source-links-sphinx.rst (#8788)
@pyup-bot: pyup: Scheduled weekly dependency update for week 52 (#8787)
@astrojuanlu: Cap setuptools even if installed packages are ignored (#8777)
@pyup-bot: pyup: Scheduled weekly dependency update for week 51 (#8776)
@astrojuanlu: Follow up to dev docs split (#8774)
@stsewd: API v3: improve message when using the API on the browser (#8768)
@stsewd: API v3: don’t include subproject_of on subprojects (#8767)
@davidfischer: Use ad client stickybox feature on RTD’s own docs (#8766)
@stsewd: API v3: explicitly test with RTD_ALLOW_ORGANIZATIONS=False (#8765)
@ericholscher: Release 6.3.1 (#8763)
@stsewd: Skip slug check when editing an organization (#8760)
@ericholscher: Fix EA branding in docs (#8758)
@pyup-bot: pyup: Scheduled weekly dependency update for week 50 (#8757)
@astrojuanlu: Add MyST Markdown examples everywhere (#8752)
Version 6.3.1
- Date:
December 14, 2021
@stsewd: Don’t run spam rules check after ban action (#8756)
@astrojuanlu: Add MyST Markdown examples everywhere (#8752)
@astrojuanlu: Update mambaforge to latest version (#8749)
@astrojuanlu: Remove sphinx-doc.org from external domains (#8747)
@humitos: Log: use structlog-sentry to send logs to Sentry (#8732)
@agjohnson: Release 6.3.0 (#8730)
@stsewd: Custom Domain: make cname_target configurable (#8728)
@stsewd: Test external serving for projects with -- in slug (#8716)
@astrojuanlu: Add guide to migrate from reST to MyST (#8714)
@astrojuanlu: Avoid future breakage of setup.py invocations (#8711)
@humitos: structlog: migrate application code to better logging (#8705)
@ericholscher: Add ability to rebuild a specific build (#6995)
Version 6.3.0
- Date:
November 29, 2021
@humitos: Tests: run tests with Python3.8 in CircleCI (#8718)
@stsewd: Test external serving for projects with -- in slug (#8716)
@astrojuanlu: Avoid future breakage of setup.py invocations (#8711)
@humitos: structlog: migrate application code to better logging (#8705)
@astrojuanlu: Add guide on Poetry (#8702)
Version 6.2.1
- Date:
November 23, 2021
@agjohnson: Fix issue with PR build hostname parsing (#8700)
@ericholscher: Fix sharing titles (#8695)
@humitos: Spam: make admin filters easier to understand (#8688)
@astrojuanlu: Clarify how to pin the Sphinx version (#8687)
@stsewd: Docs: update docs about search on subprojects (#8683)
@pyup-bot: pyup: Scheduled weekly dependency update for week 46 (#8680)
Version 6.2.0
- Date:
November 16, 2021
@rokroskar: docs: update faq to mention apt for dependencies (#8676)
@astrojuanlu: Add entry on Jupyter Book to the FAQ (#8669)
@humitos: Spam: sort admin filters and show threshold (#8666)
@humitos: Spam: check for spam rules after user is banned (#8664)
@humitos: Spam: use 410 - Gone status code when blocked (#8661)
@astrojuanlu: Upgrade readthedocs-sphinx-search (#8660)
@agjohnson: Release 6.1.2 (#8657)
@astrojuanlu: Update requirements pinning (#8655)
@stsewd: Historical records: set the change reason explicitly on the instance (#8627)
Version 6.1.2
- Date:
November 08, 2021
@astrojuanlu: Update requirements pinning (#8655)
@ericholscher: Fix GitHub permissions required (#8654)
@stsewd: Organizations: allow to add owners by email (#8651)
@pyup-bot: pyup: Scheduled weekly dependency update for week 44 (#8645)
@astrojuanlu: Document generic webhooks (#8609)
Version 6.1.1
- Date:
November 02, 2021
@agjohnson: Drop beta from title of build config option (#8637)
@astrojuanlu: Remove mentions to old Python version specification (#8635)
@Arthur-Milchior: Correct an example (#8628)
@davidfischer: Inherit theme template (#8626)
@astrojuanlu: Clarify duration of extra DNS records (#8625)
@astrojuanlu: Promote mamba more in the documentation, hide CONDA_USES_MAMBA (#8624)
@davidfischer: Floating ad placement for docs.readthedocs.io (#8621)
@stsewd: Audit: track downloads separately from page views (#8619)
Version 6.1.0
- Date:
October 26, 2021
@astrojuanlu: Clarify docs (#8608)
@astrojuanlu: New Read the Docs tutorial, part III (and finale?) (#8605)
@humitos: SSO: re-sync VCS accounts for SSO organization daily (#8601)
@humitos: Django Action: re-sync SSO organization’s users (#8600)
@pyup-bot: pyup: Scheduled weekly dependency update for week 42 (#8598)
@saadmk11: Don’t show the rebuild option on closed/merged Pull Request builds (#8590)
@carltongibson: Adjust Django intersphinx link to stable version. (#8589)
@astrojuanlu: Documentation names cleanup (#8586)
@adamtheturtle: Fix typo “interpreters” (#8583)
@ericholscher: Small fixes to asdf image upload script (#8578)
@humitos: EmbedAPIv3: docs for endpoint and guide updated (#8566)
@stsewd: Domain: allow to disable domain creation/update (#8020)
Version 6.0.0
- Date:
October 13, 2021
This release includes the upgrade of some base dependencies:
Python version from 3.6 to 3.8
Ubuntu version from 18.04 LTS to 20.04 LTS
Starting from this release, all the Read the Docs code will be tested and QAed on these versions.
Version 5.25.1
- Date:
October 11, 2021
@astrojuanlu: Small fixes (#8564)
@deepto98: Moved authenticated_classes definitions from API classes to AuthenticatedClassesMixin (#8562)
@humitos: Build: update ca-certificates before cloning (#8559)
@humitos: Build: support Python 3.10.0 stable release (#8558)
@astrojuanlu: Document new build specification (#8547)
@astrojuanlu: Add checkbox to subscribe new users to newsletter (#8546)
Version 5.25.0
- Date:
October 05, 2021
@humitos: Docs: comment about how to add a new tool/version for builders (#8548)
@astrojuanlu: Add checkbox to subscribe new users to newsletter (#8546)
@humitos: Script tools cache: fix environment variables (#8541)
@humitos: EmbedAPIv3: proxy URLs to be available under /_/ (#8540)
@humitos: Requirement: pin django-redis-cache to git tag (#8536)
@pyup-bot: pyup: Scheduled weekly dependency update for week 39 (#8531)
@astrojuanlu: Promote and restructure guides (#8528)
@stsewd: HistoricalRecords: add additional fields (ip and browser) (#8490)
Version 5.24.0
- Date:
September 28, 2021
Version 5.23.6
- Date:
September 20, 2021
@astrojuanlu: Change newsletter form (#8509)
@stsewd: Contact users: Allow to pass additional context to each email (#8507)
@astrojuanlu: Update onboarding (#8504)
@astrojuanlu: List default installed dependencies (#8503)
@astrojuanlu: Clarify that the development installation instructions are for Linux (#8494)
@astrojuanlu: Add virtual env instructions to local installation (#8488)
@astrojuanlu: New Read the Docs tutorial, part II (#8473)
Version 5.23.5
- Date:
September 14, 2021
@humitos: Organization: only mark artifacts cleaned as False if they are True (#8481)
@astrojuanlu: Fix link to version states documentation (#8475)
@pzhlkj6612: Docs: update the links to the dependency management content of setuptools docs (#8470)
@stsewd: Permissions: avoid using project.users, use proper permissions instead (#8458)
@astrojuanlu: New Read the Docs tutorial, part I (#8428)
Version 5.23.4
- Date:
September 07, 2021
@pzhlkj6612: Docs: update the links to the dependency management content of setuptools docs (#8470)
@stsewd: Permissions: avoid using project.users, use proper permissions instead (#8458)
@stsewd: Add templatetag to filter by admin projects (#8456)
@stsewd: Support form: don’t allow to change the email (#8455)
@stsewd: Search: show only results from the current role_name being filtered (#8454)
@pyup-bot: pyup: Scheduled weekly dependency update for week 35 (#8451)
@stsewd: API v3 (subprojects): filter by correct owner/organization (#8446)
@astrojuanlu: Rework Team page (#8441)
@mforbes: Added note about how to use Anaconda Project. (#8436)
@stsewd: Contact users: pass user and domain in the context (#8430)
@astrojuanlu: New Read the Docs tutorial, part I (#8428)
@stsewd: API: fix subprojects creation when organizations are enabled (#8393)
@stsewd: QuerySets: filter permissions by organizations (#8298)
Version 5.23.3
- Date:
August 30, 2021
Version 5.23.2
- Date:
August 24, 2021
@astrojuanlu: Add MyST (Markdown) examples to “cross referencing with Sphinx” guide (#8437)
@saadmk11: Added Search and Filters for RemoteRepository and RemoteOrganization admin list page (#8431)
@agjohnson: Try out codeowners (#8429)
@humitos: Proxito: do not log response header for each custom domain request (#8427)
@stsewd: Allow cookies from cross site requests to avoid problems with iframes (#8422)
@ericholscher: Don’t filter on large items in the auditing sidebar. (#8417)
@astrojuanlu: Fix YAML extension (#8416)
@ericholscher: Release 5.23.1 (#8415)
@stsewd: Audit: attach project from the request if available (#8414)
@pyup-bot: pyup: Scheduled weekly dependency update for week 33 (#8411)
@cclauss: Fix typos discovered by codespell in /docs (#8409)
@stsewd: Support: update contact information via Front webhook (#8406)
@stsewd: Allow users to remove themselves from a project (#8384)
Version 5.23.1
- Date:
August 16, 2021
@cclauss: Fix typos discovered by codespell in /docs (#8409)
@ericholscher: Add CSP header to the domain options (#8388)
Version 5.23.0
- Date:
August 09, 2021
@ericholscher: Only call analytics tracking of flyout when analytics are enabled (#8398)
@pyup-bot: pyup: Scheduled weekly dependency update for week 31 (#8385)
@DetectedStorm: Update LICENSE (#5125)
Version 5.22.0
- Date:
August 02, 2021
@pzhlkj6612: Docs: fix typo in versions.rst: -> need (#8383)
@ericholscher: Remove clickjacking middleware for proxito (#8378)
@humitos: Add support for Python 3.10 on testing Docker image (#8328)
@stsewd: Analytics: don’t fail if the page was created in another request (#8310)
Version 5.21.0
- Date:
July 27, 2021
@ericholscher: Build out the MyST section of the getting started (#8371)
@astrojuanlu: Update common (#8368)
@astrojuanlu: Redirect users to appropriate support channels using template chooser (#8366)
@humitos: Proxito: return user-defined HTTP headers on custom domains (#8360)
@ericholscher: Release 5.20.3 (#8356)
@stsewd: Track model changes with django-simple-history (#8355)
Version 5.20.3
- Date:
July 19, 2021
Version 5.20.2
- Date:
July 13, 2021
@humitos: psycopg2: pin to a compatible version with Django 2.2 (#8335)
@stsewd: Contact owners: use correct organization to filter (#8325)
@pyup-bot: pyup: Scheduled weekly dependency update for week 27 (#8317)
@mongolsteppe: Fixing minor error (#8313)
@The-Compiler: Add link to redirect docs (#8308)
@ericholscher: Add docs about setting up permissions for GH apps & orgs (#8305)
@stsewd: Slugify: don’t generate slugs with trailing - (#8302)
@ericholscher: Increase guide depth (#8300)
@humitos: PR build status: re-try up to 3 times if it fails for some reason (#8296)
@SethFalco: feat: add json schema (#8294)
@pyup-bot: pyup: Scheduled weekly dependency update for week 26 (#8293)
@stsewd: Organizations: validate that a correct slug is generated (#8292)
@astrojuanlu: Add new guide about Jupyter in Sphinx (#8283)
@humitos: oauth webhook: check the Project has a RemoteRepository (#8282)
@stsewd: Allow to email users from a management command (#8243)
@astrojuanlu: Add proposal for new Sphinx and RTD tutorials (#8106)
@stsewd: Allow to change the privacy level of external versions (#7825)
Version 5.20.1
- Date:
June 28, 2021
@stsewd: Organizations: validate that a correct slug is generated (#8292)
@humitos: oauth webhook: check the Project has a RemoteRepository (#8282)
@stsewd: Search: ask for confirmation when running reindex_elasticsearch (#8275)
@saadmk11: Hit Elasticsearch only once for each search query through the APIv2 (#8228)
@astrojuanlu: Add proposal for new Sphinx and RTD tutorials (#8106)
Version 5.20.0
- Date:
June 22, 2021
@humitos: Migration: fix RemoteRepository - Project data migration (#8271)
@ericholscher: Release 5.19.0 (#8266)
@humitos: Sync RemoteRepository for external collaborators (#8265)
@pyup-bot: pyup: Scheduled weekly dependency update for week 24 (#8262)
@humitos: Make Project -> ForeignKey -> RemoteRepository (#8259)
@agjohnson: Add basic security policy (#8254)
Version 5.19.0
Warning
This release contains a security fix to our CSRF settings: https://github.com/readthedocs/readthedocs.org/security/advisories/GHSA-3v5m-qmm9-3c6c
- Date:
June 15, 2021
@ericholscher: Remove video from our Sphinx quickstart. (#8246)
@ericholscher: Remove “Markdown” from Mkdocs title (#8245)
@astrojuanlu: Make sustainability page more visible (#8244)
@stsewd: Builds: move send_build_status to builds/tasks.py (#8241)
@ericholscher: Don’t do any CORS checking on Embed API requests (#8226)
@agjohnson: Add project/build filters (#8142)
@humitos: Sign Up: limit the providers allowed to sign up (#8062)
@stsewd: Search: use multi-fields for Wildcard queries (#7613)
@ericholscher: Add ability to rebuild a specific build (#6995)
Version 5.18.0
- Date:
June 08, 2021
@ericholscher: Backport manual indexes (#8235)
@ericholscher: Clean up SSO docs (#8233)
@ericholscher: Don’t do any CORS checking on Embed API requests (#8226)
@agjohnson: Update gitter channel name (#8217)
@ericholscher: Remove IRC from our docs (#8216)
@pyup-bot: pyup: Scheduled weekly dependency update for week 21 (#8206)
@akien-mga: Docs: Add section about deleting downloadable content (#8162)
@stsewd: Search: little optimization when saving search queries (#8132)
@akien-mga: Docs: Add some details to the User Defined Redirects (#7894)
@agjohnson: Add APIv3 version edit URL (#7594)
@saadmk11: Add List API Endpoint for RemoteRepository and RemoteOrganization (#7510)
Version 5.17.0
- Date:
May 24, 2021
@stsewd: Proxito: don’t require the middleware for proxied apis (#8203)
@ericholscher: Remove specific name from security page at user request (#8195)
@humitos: Docker: remove volumes= argument when creating the container (#8194)
@stsewd: API v2: allow listing when using private repos (#8192)
@stsewd: Proxito: redirect to main project from subprojects (#8187)
@pyup-bot: pyup: Scheduled weekly dependency update for week 20 (#8186)
@agjohnson: Add DPA to legal docs in documentation (#8130)
Version 5.16.0
- Date:
May 18, 2021
@stsewd: QuerySets: check for .is_superuser instead of has_perm (#8181)
@humitos: Build: use is_active method to know if the build should be skipped (#8179)
@stsewd: Project: use IntegerField for remote_repository from project form (#8176)
@stsewd: Docs: remove some lies from cross referencing guide (#8173)
@pyup-bot: pyup: Scheduled weekly dependency update for week 19 (#8170)
@stsewd: Querysets: include organizations in is_active check (#8163)
@davidfischer: Disable FLOC by introducing permissions policy header (#8145)
Version 5.15.0
- Date:
May 10, 2021
@stsewd: Ads: don’t load script if a project is marked as ad_free (#8164)
@stsewd: Querysets: include organizations in is_active check (#8163)
@pyup-bot: pyup: Scheduled weekly dependency update for week 18 (#8153)
@stsewd: Search: default to search on default version of subprojects (#8148)
@humitos: Metrics: run metrics task every 30 minutes (#8138)
@humitos: web-celery: add logging for OOM debug on suspicious tasks (#8131)
@agjohnson: Fix a few style and grammar issues with SSO docs (#8109)
@stsewd: Embed: don’t fail while querying sections with bad id (#8084)
@stsewd: Design doc: allow to install packages using apt (#8060)
Version 5.14.3
- Date:
April 26, 2021
@humitos: Metrics: run metrics task every 30 minutes (#8138)
@humitos: web-celery: add logging for OOM debug on suspicious tasks (#8131)
@stsewd: Celery router: check all n last builds for Conda (#8129)
@jonels-msft: Include aria-label in flyout search box (#8127)
@stsewd: BuildCommand: don’t leak stacktrace to the user (#8121)
@stsewd: API (v2): use empty list in serializer’s exclude (#8120)
@astrojuanlu: Miscellaneous doc improvements (#8118)
@pyup-bot: pyup: Scheduled weekly dependency update for week 16 (#8117)
@agjohnson: Fix a few style and grammar issues with SSO docs (#8109)
Version 5.14.2
- Date:
April 20, 2021
@stsewd: Sync versions: don’t fetch/return all versions (#8114)
@astrojuanlu: Improve contributing docs, take 2 (#8113)
@Harmon758: Docs: fix typo in config-file/v2.rst (#8102)
@cocobennett: Improve documentation on contributing to documentation (#8082)
Version 5.14.1
- Date:
April 13, 2021
@cocobennett: Add page and page_size to server side api documentation (#8080)
@stsewd: Version warning banner: inject on role=”main” or main tag (#8079)
@stsewd: Conda: protect against None when appending core requirements (#8077)
@humitos: SSO: add small paragraph mentioning how to enable it on commercial (#8063)
@agjohnson: Add separate version create view and create view URL (#7595)
Version 5.14.0
- Date:
April 06, 2021
This release includes a security update which was done in a private branch PR. See our security changelog for more details.
@pyup-bot: pyup: Scheduled weekly dependency update for week 14 (#8071)
@astrojuanlu: Clarify ad-free conditions (#8064)
@humitos: SSO: add small paragraph mentioning how to enable it on commercial (#8063)
@stsewd: Build environment: allow to run commands with a custom user (#8058)
@humitos: Design document for new Docker images structure (#7566)
Version 5.13.0
- Date:
March 30, 2021
@ericholscher: Fix proxito slash redirect for leading slash (#8044)
@pyup-bot: pyup: Scheduled weekly dependency update for week 12 (#8038)
@flying-sheep: Add publicly visible env vars (#7891)
Version 5.12.2
- Date:
March 23, 2021
@ericholscher: Standardize footerjs code (#8032)
@stsewd: Search: don’t leak data for projects with this feature disabled (#8029)
@ericholscher: Canonicalize all proxito slashes (#8028)
@ericholscher: Make pageviews analytics show top 25 pages (#8027)
@ericholscher: Add CSV header data for search analytics (#8026)
@humitos: Use RemoteRepository relation to match already imported projects (#8024)
@stsewd: Builds: restart build commands before a new build (#7999)
@saadmk11: Remote Repository and Remote Organization Normalization (#7949)
Version 5.12.1
- Date:
March 16, 2021
@pyup-bot: pyup: Scheduled weekly dependency update for week 11 (#8019)
@stsewd: Embed: Allow to override embed view for proxied use (#8018)
@humitos: RemoteRepository: Improvements to sync_vcs_data.py script (#8017)
@davidfischer: Fix AWS image so it looks sharp (#8009)
@humitos: Stripe Checkout: handle duplicated webhook (#8002)
@saadmk11: Add __str__ to RemoteRepositoryRelation and RemoteOrganizationRelation and Use raw_id_fields in Admin (#8001)
@saadmk11: Remove duplicate results from RemoteOrganization API (#8000)
@ericholscher: Make SupportView login_required (#7997)
@ericholscher: Release 5.12.0 (#7996)
@pyup-bot: pyup: Scheduled weekly dependency update for week 10 (#7995)
@saadmk11: Remove json field from RemoteRepositoryRelation and RemoteOrganizationRelation model (#7993)
@humitos: Use independent Docker image to build assets (#7992)
@Pradhvan: Fixes typo in getting-started-with-sphinx: (#7991)
@humitos: Allow donate app to use Stripe Checkout for one-time donations (#7983)
@ericholscher: Add proxito healthcheck (#7948)
@Pradhvan: Docs: Adds Myst to the getting started with sphinx (#7938)
Version 5.12.0
- Date:
March 08, 2021
@pyup-bot: pyup: Scheduled weekly dependency update for week 10 (#7995)
@saadmk11: Remove json field from RemoteRepositoryRelation and RemoteOrganizationRelation model (#7993)
@humitos: Use independent Docker image to build assets (#7992)
@Pradhvan: Fixes typo in getting-started-with-sphinx: (#7991)
@stsewd: Search: use doctype from indexed pages instead of the db (#7984)
@humitos: Allow donate app to use Stripe Checkout for one-time donations (#7983)
@stsewd: Docs: update expand_tabs to work with the latest version of sphinx-tabs (#7979)
@ericholscher: Fix build routing (#7978)
@stsewd: Builds: register tasks to delete inactive external versions (#7975)
@ericholscher: refactor footer, add jobs & status page (#7970)
@humitos: Upgrade postgres-client to v12 in Docker image (#7967)
@saadmk11: Add management command to Load Project and RemoteRepository Relationship from JSON file (#7966)
@astrojuanlu: Update guide on conda support (#7965)
@stsewd: Search: make –queue required for management command (#7952)
@ericholscher: Add proxito healthcheck (#7948)
@Pradhvan: Docs: Adds Myst to the getting started with sphinx (#7938)
@ericholscher: Add a support form to the website (#7929)
@stsewd: Install latest mkdocs by default as we do with sphinx (#7869)
Version 5.11.0
- Date:
March 02, 2021
@saadmk11: Add management command to Load Project and RemoteRepository Relationship from JSON file (#7966)
@saadmk11: Add Management Command to Dump Project and RemoteRepository Relationship in JSON format (#7957)
@davidfischer: Enable the cached template loader (#7953)
@FatGrizzly: Added warnings for previous gitbook users (#7945)
@ericholscher: Change our sponsored hosting from Azure -> AWS. (#7940)
@Pradhvan: Docs: Adds Myst to the getting started with sphinx (#7938)
@ericholscher: Add a support form to the website (#7929)
@fabianmp: Allow to use a different url for intersphinx object file download (#7807)
Version 5.10.0
- Date:
February 23, 2021
@pyup-bot: pyup: Scheduled weekly dependency update for week 08 (#7941)
@PawelBorkar: Update license (#7934)
@humitos: Route external versions to the queue where the default version was built (#7933)
@humitos: Pin jedi dependency to avoid breaking ipython (#7932)
@humitos: Use admin user for SLUMBER API on local environment (#7925)
@pyup-bot: pyup: Scheduled weekly dependency update for week 07 (#7913)
@humitos: Route PR builds to the last queue where a build was executed (#7912)
@stsewd: Search: improve re-index management command (#7904)
@stsewd: Search: link to main project in subproject results (#7880)
@humitos: Upgrade Celery and friends to latest versions (#7786)
Version 5.9.0
- Date:
February 16, 2021
Last Friday we migrated our site from Azure to AWS (read the blog post). This is the first release into our new AWS infra.
@humitos: Route PR builds to the last queue where a build was executed (#7912)
@davidfischer: Make storage classes into module level vars (#7908)
@pyup-bot: pyup: Scheduled weekly dependency update for week 06 (#7896)
@nedbat: Doc fix: two endpoints had ‘pip’ for the project_slug (#7895)
@stsewd: Set storage for BuildCommand and BuildEnvironment as private (#7893)
@pyup-bot: pyup: Scheduled weekly dependency update for week 05 (#7887)
@humitos: Add support for Python 3.9 on “testing” Docker image (#7885)
@pyup-bot: pyup: Scheduled weekly dependency update for week 04 (#7867)
@humitos: Log Stripe errors when trying to delete customer/subscription (#7853)
@humitos: Save builder when the build is concurrency limited (#7851)
@pyup-bot: pyup: Scheduled weekly dependency update for week 03 (#7840)
@humitos: Speed up concurrent builds by limited to 5 hours ago (#7839)
@saadmk11: Add Option to Enable External Builds Through Project Update API (#7834)
@stsewd: Docs: mention the version warning is for sphinx only (#7832)
@agjohnson: Hide design docs from documentation (#7826)
@stsewd: Update docs about preview from pull/merge requests (#7823)
@humitos: Register MetricsTask to send metrics to AWS CloudWatch (#7817)
@humitos: Use S3 (MinIO emulator) as storage backend (#7812)
@zachdeibert: Cloudflare to Cloudflare CNAME Records (#7801)
@humitos: Documentation for /organizations/ endpoint in commercial (#7800)
@stsewd: Privacy Levels: migrate protected projects to private (#7608)
@pawamoy: Don’t lose python/name tags values in mkdocs.yml (#7507)
Version 5.8.5
- Date:
January 18, 2021
@pyup-bot: pyup: Scheduled weekly dependency update for week 03 (#7840)
@humitos: Speed up concurrent builds by limited to 5 hours ago (#7839)
@saadmk11: Add Option to Enable External Builds Through Project Update API (#7834)
@stsewd: Docs: mention the version warning is for sphinx only (#7832)
@stsewd: PR preview: pass PR and build urls to sphinx context (#7828)
@agjohnson: Hide design docs from documentation (#7826)
@humitos: Log Stripe Resource fallback creation in Sentry (#7820)
@humitos: Register MetricsTask to send metrics to AWS CloudWatch (#7817)
@saadmk11: Add management command to Sync RemoteRepositories and RemoteOrganizations (#7803)
Version 5.8.4
- Date:
January 12, 2021
Version 5.8.3
- Date:
January 05, 2021
@humitos: Change query on send_build_status task for compatibility with .com (#7797)
@ericholscher: Update build concurrency numbers for Business (#7794)
@pyup-bot: pyup: Scheduled weekly dependency update for week 01 (#7793)
@timgates42: docs: fix simple typo, -> translations (#7781)
@ericholscher: Release 5.8.2 (#7776)
@humitos: Use Python3.7 on conda base environment when using mamba (#7773)
@ericholscher: Migrate sync_versions from an API call to a task (#7548)
@humitos: Design document for RemoteRepository DB normalization (#7169)
Version 5.8.2
- Date:
December 21, 2020
@humitos: Use Python3.7 on conda base environment when using mamba (#7773)
@humitos: Register StopBuilder task to be executed by builders (#7759)
@stsewd: Search: use alias to link to search results of subprojects (#7757)
@saadmk11: Set The Right Permissions on GitLab OAuth RemoteRepository (#7753)
@fabianmp: Allow to add additional binds to Docker build container (#7684)
Version 5.8.1
- Date:
December 14, 2020
@saadmk11: Use “path_with_namespace” for GitLab RemoteRepository full_name Field (#7746)
@stsewd: Version sync: exclude external versions when deleting (#7742)
@stsewd: Search: limit number of sections and domains to 10K (#7741)
@stsewd: Traffic analytics: don’t pass context if the feature isn’t enabled (#7740)
@stsewd: Analytics: move page views to its own endpoint (#7739)
@stsewd: FeatureQuerySet: make check for date inclusive (#7737)
@saadmk11: Use remote_id and vcs_provider Instead of full_name to Get RemoteRepository (#7734)
@pyup-bot: pyup: Scheduled weekly dependency update for week 49 (#7730)
@saadmk11: Update parts of code that were using the old RemoteRepository model fields (#7728)
@stsewd: Builds: don’t delete them when a version is deleted (#7679)
@humitos: Use mamba under a feature flag to create conda environments (#6815)
Version 5.8.0
- Date:
December 08, 2020
@stsewd: Search: use with_positions_offsets term vector for some fields (#7724)
@stsewd: Search: filter only active and built versions from subprojects (#7723)
@stsewd: Extra features: allow to display them conditionally (#7715)
@humitos: Define pre/post_collectstatic signals and send them (#7701)
@davidfischer: Support the new Google analytics gtag.js (#7691)
@stsewd: External versions: delete after 3 months of being merged/closed (#7678)
@stsewd: Automation Rules: keep history of recent matches (#7658)
Version 5.7.0
- Date:
December 01, 2020
@davidfischer: Ensure there is space for sidebar ads (#7716)
@humitos: Install six as core requirement for builds (#7710)
@ericholscher: Release 5.6.1 (#7695)
@stsewd: Sync versions: use stable version instead of querying all versions (#7380)
Version 5.6.5
- Date:
November 23, 2020
@stsewd: Tests: mock update_docs_task to speed up tests (#7677)
@stsewd: Tests: create an organization when running in .com (#7673)
@davidfischer: Speed up the tag index page (#7671)
@davidfischer: Fix for out of order script loading (#7670)
@davidfischer: Set ad configuration values if using explicit placement (#7669)
@pyup-bot: pyup: Scheduled weekly dependency update for week 46 (#7668)
@stsewd: Tests: mock trigger build to speed up tests (#7661)
@stsewd: Remote repository: save and set default_branch (#7646)
@stsewd: Search: exclude some fields from source results (#7640)
@stsewd: Search: allow to search on different versions of subprojects (#7634)
@saadmk11: Add Initial Modeling with Through Model and Data Migration for RemoteRepository Model (#7536)
Version 5.6.4
- Date:
November 16, 2020
@davidfischer: Fix for out of order script loading (#7670)
@davidfischer: Set ad configuration values if using explicit placement (#7669)
@pyup-bot: pyup: Scheduled weekly dependency update for week 46 (#7668)
@pyup-bot: pyup: Scheduled weekly dependency update for week 45 (#7655)
@stsewd: Automation rules: add delete version action (#7644)
@stsewd: Search: exclude some fields from source results (#7640)
@saadmk11: Add Initial Modeling with Through Model and Data Migration for RemoteRepository Model (#7536)
Version 5.6.3
- Date:
November 10, 2020
Version 5.6.2
- Date:
November 03, 2020
@davidfischer: Display sidebar ad when scrolled (#7621)
@humitos: Catch requests.exceptions.ReadTimeout when removing container (#7617)
@humitos: Allow search and filter in Django Admin for Message model (#7615)
@stsewd: Search: respect feature flag in dashboard search (#7611)
@ericholscher: Release 5.6.1 (#7604)
Version 5.6.1
- Date:
October 26, 2020
@agjohnson: Bump common to include docker task changes (#7597)
@agjohnson: Default to sphinx theme 0.5.0 when defaulting to latest sphinx (#7596)
@humitos: Use correct Cache-Tag (CDN) and X-RTD-Project header on subprojects (#7593)
@davidfischer: Ads JS hotfix (#7586)
@agjohnson: Add remoterepo query param (#7580)
@agjohnson: Undeprecate APIv2 in docs (#7579)
@agjohnson: Add settings and docker configuration for working with new theme (#7578)
@humitos: Add our readthedocs_processor data to our notifications (#7565)
@stsewd: Builds: always install latest version of our sphinx extension (#7542)
@ericholscher: Add future default true to Feature flags (#7524)
@stsewd: Add feature flag to not install the latest version of pip (#7522)
@davidfischer: No longer proxy RTD ads through RTD servers (#7506)
Version 5.6.0
- Date:
October 19, 2020
@stsewd: Docs: show example of a requirements.txt file (#7563)
@pyup-bot: pyup: Scheduled weekly dependency update for week 40 (#7537)
@ericholscher: Add future default true to Feature flags (#7524)
@davidfischer: No longer proxy RTD ads through RTD servers (#7506)
@davidfischer: Allow projects to opt-out of analytics (#7175)
Version 5.5.3
- Date:
October 13, 2020
@ericholscher: Add a reference to the Import guide at the start of Getting started (#7547)
Version 5.5.2
- Date:
October 06, 2020
@stsewd: Domain: show created/modified date in admin (#7517)
@ericholscher: Revert “New docker image for builders: 8.0” (#7514)
@srijan-deepsource: Fix some code quality issues (#7494)
Version 5.5.1
- Date:
September 28, 2020
Version 5.5.0
- Date:
September 22, 2020
Version 5.4.3
- Date:
September 15, 2020
Version 5.4.2
- Date:
September 09, 2020
@humitos: Show “Connected Services” form errors to the user (#7469)
@humitos: Allow to extend OrganizationTeamBasicForm from -corporate (#7467)
@pyup-bot: pyup: Scheduled weekly dependency update for week 36 (#7465)
@stsewd: Remote repository: filter by account before deleting (#7454)
@humitos: Truncate the beginning of the commands’ output (#7449)
@davidfischer: Update links to advertising (#7443)
@humitos: Grab the correct name of RemoteOrganization to use in the query (#7430)
@pyup-bot: pyup: Scheduled weekly dependency update for week 35 (#7423)
@humitos: Mark a build as DUPLICATED (same version) only if it’s close in time (#7420)
Version 5.4.1
- Date:
September 01, 2020
@bmorrison4: Fix typo in docs/guides/adding-custom-css.rst (#7424)
@stsewd: Docker: install requirements from local changes (#7409)
@pyup-bot: pyup: Scheduled weekly dependency update for week 34 (#7406)
@saadmk11: build_url added to all API v3 build endpoints (#7373)
@humitos: Auto-join email users field for Team model (#7328)
@humitos: Sync RemoteRepository and RemoteOrganization in all VCS providers (#7310)
@stsewd: Page views: use origin URL instead of page name (#7293)
Version 5.4.0
- Date:
August 25, 2020
Version 5.3.0
- Date:
August 18, 2020
@humitos: Remove the comma added in logs that breaks grep parsing (#7393)
@stsewd: GitLab webhook: don’t fail on invalid payload (#7391)
@stsewd: External providers: better logging for GitLab (#7385)
@stsewd: Sync versions: little optimization when deleting versions (#7367)
@agjohnson: Add feature flag to just skip the sync version task entirely (#7366)
@agjohnson: Convert zip to list for templates (#7359)
Version 5.2.3
- Date:
August 04, 2020
@davidfischer: Add a middleware for referrer policy (#7346)
@stsewd: Footer: don’t show the version warning for external version (#7340)
@ericholscher: Lower rank for custom install docs. (#7339)
@benjaoming: Argument list for “python -m virtualenv” without empty strings (#7330)
@stsewd: Docs: little improvements on getting started docs (#7316)
@stsewd: Docs: make it more clear search on subprojects (#7272)
Version 5.2.2
- Date:
July 29, 2020
@agjohnson: Reduce robots.txt cache TTL (#7334)
@davidfischer: Use the privacy embed for YouTube (#7320)
@DougCal: re-worded text on top of “Import a Repository” (#7318)
@stsewd: Docs: make it clear the config file options are per version (#7314)
@humitos: Feature to disable auto-generated index.md/README.rst files (#7305)
@humitos: Enable SessionAuthentication on APIv3 endpoints (#7295)
@pyup-bot: pyup: Scheduled weekly dependency update for week 28 (#7287)
@humitos: Make “homepage” optional when updating a project (#7286)
@humitos: Allow users to set hidden on versions via APIv3 (#7285)
@humitos: Documentation for Single Sign-On feature on commercial (#7212)
Version 5.2.1
- Date:
July 14, 2020
@davidfischer: Fix a case where “tags” is interpreted as a project slug (#7284)
@agjohnson: Fix versions (#7271)
@saadmk11: Automation rule to make versions hidden added (#7265)
@stsewd: Sphinx: add –keep-going when fail_on_warning is true (#7251)
@saadmk11: Don’t allow Domain name matching production domain to be created (#7244)
@humitos: Documentation for Single Sign-On feature on commercial (#7212)
Version 5.2.0
- Date:
July 07, 2020
Version 5.1.5
- Date:
July 01, 2020
@choldgraf: cross-linking build limitations for pr builds (#7248)
@humitos: Allow to extend Import Project page from corporate (#7234)
@humitos: Make RemoteRepository.full_name db_index=True (#7231)
@ericholscher: Re-add the rst filter that got removed (#7223)
Version 5.1.4
- Date:
June 23, 2020
@stsewd: Search: index from html files for mkdocs projects (#7208)
@humitos: Use total_memory to calculate “time” Docker limit (#7203)
@davidfischer: Feature flag for using latest Sphinx (#7201)
@ericholscher: Mention that we don’t index search in PR builds (#7199)
@davidfischer: Add a feature flag to use latest RTD Sphinx ext (#7198)
@ericholscher: Release 5.1.3 (#7197)
@agjohnson: Use theme release 0.5.0rc1 for docs (#7037)
@humitos: Skip promoting new stable if current stable is not machine=True (#6695)
Version 5.1.3
- Date:
June 16, 2020
@davidfischer: Fix the project migration conflict (#7196)
@ericholscher: Document the fact that PR builds are now enabled on .org (#7187)
@ericholscher: Update sharing examples (#7179)
@davidfischer: Allow projects to opt-out of analytics (#7175)
@stsewd: Docs: install readthedocs-sphinx-search from pypi (#7174)
@ericholscher: Reduce logging in proxito middleware so it isn’t in Sentry (#7172)
@ericholscher: Release 5.1.2 (#7171)
@humitos: Use CharField.choices for Build.status_code (#7166)
@davidfischer: Store pageviews via signals, not tasks (#7106)
Version 5.1.2
- Date:
June 09, 2020
@humitos: Use CharField.choices for Build.status_code (#7166)
@ericholscher: Reindex search on the reindex queue (#7161)
@stsewd: Project search: Show original description when there isn’t highlight (#7160)
@ericholscher: Fix custom URLConf redirects (#7155)
@ericholscher: Allow blank=True for URLConf (#7153)
@stsewd: Project: make external_builds_enabled not null (#7144)
@saadmk11: Do not Pre-populate username field for account delete (#7143)
@davidfischer: Add feature flag to use the stock Sphinx builders (#7141)
@ericholscher: Move changes_files to before search indexing (#7138)
@stsewd: Proxito middleware: reset to original urlconf after request (#7137)
@ericholscher: Revert “Merge pull request #7101 from readthedocs/show-last-total” (#7133)
@ericholscher: Release 5.1.1 (#7129)
@humitos: Use “-j auto” on sphinx-build command to build in parallel (#7128)
@stsewd: Search: refactor API to not emulate a Django queryset (#7114)
@davidfischer: Store pageviews via signals, not tasks (#7106)
@stsewd: Search: don’t index line numbers from code blocks (#7104)
@ericholscher: Add a project-level configuration for PR builds (#7090)
@pyup-bot: pyup: Scheduled weekly dependency update for week 18 (#7012)
@stsewd: Allow to enable server side search for MkDocs (#6986)
@ericholscher: Add ability for users to set their own URLConf (#6963)
Version 5.1.1
- Date:
May 26, 2020
@humitos: Add a tip in EmbedAPI to use Sphinx reference in section (#7099)
@ericholscher: Release 5.1.0 (#7098)
@ericholscher: Add a setting for storing pageviews (#7097)
@ericholscher: Fix the unresolver not working properly with root paths (#7093)
@ericholscher: Add a project-level configuration for PR builds (#7090)
@santos22: Fix tests ahead of django-dynamic-fixture update (#7073)
@ericholscher: Add ability for users to set their own URLConf (#6963)
@dojutsu-user: Store Pageviews in DB (#6121)
Version 5.1.0
- Date:
May 19, 2020
This release includes one major new feature which is Pageview Analytics. This allows projects to see the pages in their docs that have been viewed in the past 30 days, giving them an idea of what pages to focus on when updating them.
This release also has a few small search improvements, doc updates, and other bugfixes as well.
@ericholscher: Add a setting for storing pageviews (#7097)
@ericholscher: Fix the unresolver not working properly with root paths (#7093)
@davidfischer: Document HSTS support (#7083)
@davidfischer: Canonical/HTTPS redirect fix (#7075)
@santos22: Fix tests ahead of django-dynamic-fixture update (#7073)
@stsewd: Sphinx Search: don’t skip indexing if one file fails (#7071)
@stsewd: Search: generate full link from the server side (#7070)
@ericholscher: Fix PR builds being marked built (#7069)
@ericholscher: Add a page about choosing between .com/.org (#7068)
@ericholscher: Release 5.0.0 (#7064)
@ericholscher: Docs: Refactor and simplify our docs (#7052)
@stsewd: Search Document: remove unused class methods (#7035)
@stsewd: RTDFacetedSearch: pass filters in one way only (#7032)
@dojutsu-user: Store Pageviews in DB (#6121)
Version 5.0.0
- Date:
May 12, 2020
This release includes two large changes, one that is breaking and requires a major version upgrade:
We have removed our deprecated doc serving code that used core/views, core/symlinks, and builds/syncers (#6535). All doc serving should now be done via proxito. In production this has been the case for over a month; we have now removed the deprecated code from the codebase.
We did a large documentation refactor that should make things nicer to read and highlight more of our existing features. This is the first of a series of new documentation additions we have planned.
@ericholscher: Fix the caching of featured projects (#7054)
@ericholscher: Docs: Refactor and simplify our docs (#7052)
@stsewd: Mention using ssh URLs when using private submodules (#7046)
@ericholscher: Show project slug in Version admin (#7042)
@agjohnson: Use a high time limit for celery build task (#7029)
@ericholscher: Clean up build admin to make list display match search (#7028)
@agjohnson: Move docker limits back to setting (#7023)
@ericholscher: Release 4.1.8 (#7020)
@ericholscher: Cleanup unresolver logging (#7019)
@stsewd: Document about next when using a secret link (#7015)
@stsewd: Remove unused field project.version_privacy_level (#7011)
@ericholscher: Add proxito headers to redirect responses (#7007)
@humitos: Show a list of packages installed on environment (#6992)
@eric-wieser: Ensure invoked Sphinx matches importable one (#6965)
@ericholscher: Add an unresolver similar to our resolver (#6944)
@KengoTODA: Replace “PROJECT” with project object (#6878)
@humitos: Remove code replaced by El Proxito and stateless servers (#6535)
Version 4.1.8
- Date:
May 05, 2020
This release adds a few new features and bugfixes.
The largest change is the addition of hidden versions, which allows docs to be built but not shown to users on the site. This will keep old links from breaking but not direct new users there.
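As a rough illustration of how a hidden version is meant to be used, here is a minimal sketch in Python with the requests library, assuming the hidden flag is exposed through the project versions API (it can also be toggled from the version settings in the project dashboard); the project slug, version slug, and token below are placeholders:
import requests

# Placeholder values -- substitute your own project, version, and API token.
API = "https://readthedocs.org/api/v3"
TOKEN = "your-api-token"
PROJECT = "rtd-tutorial"
VERSION = "v1.0"

# Mark an old version as hidden: it still builds and old links keep working,
# but it is no longer listed for new visitors.
response = requests.patch(
    f"{API}/projects/{PROJECT}/versions/{VERSION}/",
    json={"active": True, "hidden": True},
    headers={"Authorization": f"Token {TOKEN}"},
)
response.raise_for_status()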
We’ve also expanded the CDN support to make sure we’re passing headers on 3xx and 4xx responses. This will allow us to expand the timeout on our CDN.
We’ve also updated and added a good amount of documentation in this release, and we’re starting a larger refactor of our docs to help users understand the platform better.
@ericholscher: Cleanup unresolver logging (#7019)
@ericholscher: Add CDN to the installed apps (#7014)
@eric-wieser: Emit a better error if no feature flag is found (#7009)
@ericholscher: Add proxito headers to redirect responses (#7007)
@ericholscher: Add Priority 0 to Celery (#7006)
@ericholscher: Start storing JSON data for PR builds (#7001)
@yarikoptic: Add a note if build status is not being reported (#6999)
@davidfischer: Exclusively handle proxito HSTS from the backend (#6994)
@humitos: Mention concurrent builds limitation in “Build Process” (#6993)
@humitos: Show a list of packages installed on environment (#6992)
@ericholscher: Log sync_repository_task when we run it (#6987)
@ericholscher: Remove old SSL cert warning, since they now work. (#6985)
@agjohnson: More fixes for automatic Docker limits (#6982)
@davidfischer: Add details to our changelog for 4.1.7 (#6978)
@ericholscher: Release 4.1.7 (#6976)
@ericholscher: Catch infinite canonical redirects (#6973)
@eric-wieser: Ensure invoked Sphinx matches importable one (#6965)
@ericholscher: Add an unresolver similar to our resolver (#6944)
@humitos: Optimization on sync_versions to use ls-remote on Git VCS (#6930)
@humitos: Split X-RTD-Version-Method header into two HTTP headers. (#6907)
@stsewd: Allow to override sign in and sign out views (#6901)
Version 4.1.7
- Date:
April 28, 2020
As of this release, most documentation on Read the Docs Community is now behind Cloudflare’s CDN. It should be much faster for people further from US East. Please report any issues you experience with stale cached documentation (especially CSS/JS).
Another change in this release relates to how custom domains are handled.
Custom domains will now redirect HTTP -> HTTPS if the Domain’s “HTTPS” flag is set.
Also, the subdomain URL (e.g. <project>.readthedocs.io/...) should redirect to the custom domain if the Domain’s “canonical” flag is set.
These flags are configurable in your project dashboard under Admin > Domains.
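As a rough sketch of the redirect behavior described above, assuming a hypothetical custom domain docs.example.com with both flags enabled (the exact status codes may vary):
import requests

# An HTTP request to a custom domain with the “HTTPS” flag set should redirect to HTTPS.
http_response = requests.get("http://docs.example.com/en/latest/", allow_redirects=False)
print(http_response.status_code, http_response.headers.get("Location"))

# The subdomain URL should redirect to the custom domain when the “canonical” flag is set.
sub_response = requests.get("https://example-project.readthedocs.io/en/latest/", allow_redirects=False)
print(sub_response.status_code, sub_response.headers.get("Location"))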
Many of the other changes related to improvements for our infrastructure to allow us to have autoscaling build and web servers. There were bug fixes for projects using versions tied to annotated git tags and custom user redirects will now send query parameters.
@ericholscher: Reduce proxito logging (#6970)
@ericholscher: Fix the trailing slash in our repo regexs (#6956)
@davidfischer: Add canonical to the Domain listview in the admin (#6954)
@davidfischer: Allow setting HSTS on a per domain basis (#6953)
@humitos: Refactor how we handle GitHub webhook events (#6949)
@humitos: Return 400 when importing an already existing project (#6948)
@humitos: Return max_concurrent_builds in ProjectAdminSerializer (#6946)
@tom-doerr: Update year (#6945)
@humitos: Revert “Use requests.head to query storage.exists” (#6941)
@ericholscher: Release 4.1.6 (#6940)
@stsewd: Remove note about search analytics being beta (#6939)
@stsewd: Add troubleshooting section for dev search docs (#6933)
@davidfischer: Index date and ID together on builds (#6926)
@davidfischer: CAA records are not only for users of Cloudflare DNS (#6925)
@davidfischer: Docs on supporting root domains (#6923)
@ericholscher: Add basic support for lower priority PR builds (#6921)
@ericholscher: Change the dashboard search to default to searching files (#6920)
@davidfischer: Canonicalize domains and redirect in proxito (#6905)
@zdover23: Made syntactical improvements and fixed some vocabulary issues. (#6825)
Version 4.1.6
- Date:
April 21, 2020
@humitos: Do not override the domain of Azure Storage (#6928)
@humitos: Per-project concurrency and check before triggering the build (#6927)
@davidfischer: Remove note about underscore in domain (#6924)
@ericholscher: Improve logging around status setting on PR builds (#6912)
@ericholscher: Add hoverxref to our docs (#6911)
@ericholscher: Fix Cache-Tag header name (#6908)
@ericholscher: Include the project slug in the PR context (#6904)
@ericholscher: Fix single version infinite redirect (#6900)
@humitos: Use a custom Task Router to route tasks dynamically (#6849)
@zdover23: Made syntactical improvements and fixed some vocabulary issues. (#6825)
@stsewd: Force to use proxied API for footer and search (#6768)
@ericholscher: Only output debug logging from RTD app (#6717)
@ericholscher: Add ability to sort dashboard by modified date (#6680)
@stsewd: Protection against None when sending notifications (#6610)
Version 4.1.5
- Date:
April 15, 2020
@ericholscher: Fix Cache-Tag header name (#6908)
@ericholscher: Fix single version infinite redirect (#6900)
@ericholscher: Release 4.1.4 (#6899)
@humitos: On Azure .exists blob timeout, log the exception and return False (#6895)
@ericholscher: Fix URLs like /projects/subproject from 404ing when they don’t end with a slash (#6888)
@ericholscher: Allocate docker limits based on server size. (#6879)
Version 4.1.4
- Date:
April 14, 2020
@humitos: On Azure .exists blob timeout, log the exception and return False (#6895)
@ericholscher: Fix URLs like /projects/subproject from 404ing when they don’t end with a slash (#6888)
@ericholscher: Add CloudFlare Cache tags support (#6887)
@ericholscher: Allocate docker limits based on server size. (#6879)
@ericholscher: Make the status name in CI configurable via setting (#6877)
@ericholscher: Add 12 hour caching to our robots.txt serving (#6876)
@humitos: Filter triggered builds when checking concurrency (#6875)
@ericholscher: Fix issue with sphinx domain types with : in them (#6874)
@stsewd: Make dashboard faster for projects with a lot of subprojects (#6873)
@ericholscher: Release 4.1.3 (#6872)
@stsewd: Don’t do unnecessary queries when listing subprojects (#6869)
@stsewd: Don’t do extra query if the project is a translation (#6865)
@stsewd: Reduce queries to storage to serve 404 pages (#6845)
@stsewd: Add checking the github oauth app in the troubleshooting page (#6827)
@humitos: Return full path URL (including html) on /api/v2/docurl/ endpoint (#6082)
Version 4.1.3
- Date:
April 07, 2020
@stsewd: Don’t do unnecessary queries when listing subprojects (#6869)
@stsewd: Don’t do extra query if the project is a translation (#6865)
@ericholscher: Make development docs a bit easier to find (#6861)
@davidfischer: Add an advertising API timeout (#6856)
@humitos: Do not save pip cache when using CACHED_ENVIRONMENT (#6820)
@ericholscher: Denormalize from_url_without_rest onto the redirects model (#6780)
@davidfischer: Developer docs emphasize the Docker setup (#6682)
@davidfischer: Document setting up connected accounts in dev (#6681)
@humitos: Return full path URL (including html) on /api/v2/docurl/ endpoint (#6082)
Version 4.1.2
- Date:
March 31, 2020
@humitos: Allow receiving None for template_html when sending emails (#6834)
@ericholscher: Fix silly issue with sync_callback (#6830)
@ericholscher: Show the builder in the Build admin (#6826)
@ericholscher: Properly call sync_callback when there aren’t any MULTIPLE_APP_SERVERS settings (#6823)
@stsewd: Allow to override app from where to read templates (#6821)
@humitos: Do not save pip cache when using CACHED_ENVIRONMENT (#6820)
@ericholscher: Release 4.1.1 (#6818)
@humitos: Use watchman when calling runserver in local development (#6813)
@humitos: Show “Uploading” build state when uploading artifacts into storage (#6810)
@humitos: Update guide about building consuming too much resources (#6778)
Version 4.1.1
- Date:
March 24, 2020
@stsewd: Respect order when serving 404 (version -> default_version) (#6805)
@humitos: Use storage.open API correctly for tar files (build cached envs) (#6799)
@humitos: Check 404 page once when slug and default_version is the same (#6796)
@humitos: Do not reset the build start time when running build env (#6794)
@humitos: Skip .cache directory for cached builds if it does not exist (#6791)
@ericholscher: Remove GET args from the path passed via proxito header (#6790)
@ericholscher: Release 4.1.0 (#6788)
@ericholscher: Revert “Add feature flag to just completely skip sync and symlink operations (#6689)” (#6781)
Version 4.1.0
- Date:
March 17, 2020
@ericholscher: Properly proxy the Proxito headers via nginx/sendfile (#6782)
@ericholscher: Revert “Add feature flag to just completely skip sync and symlink operations (#6689)” (#6781)
@humitos: Upgrade django-storages to support URLs with more http methods (#6771)
@davidfischer: Use the hotfixed version of django-messages-extends (#6767)
@ericholscher: Release 4.0.3 (#6766)
@humitos: Pull/Push cached environment using storage (#6763)
@stsewd: Refactor search view to make use of permission_classes (#6761)
Version 4.0.3
- Date:
March 10, 2020
@stsewd: Refactor search view to make use of permission_classes (#6761)
@ericholscher: Revert “Merge pull request #6739 from readthedocs/agj/docs-tos-pdf” (#6760)
@ericholscher: Expand the logic in our proxito mixin. (#6759)
@comradekingu: Spelling: “Set up your environment” (#6752)
@ericholscher: Release 4.0.2 (#6741)
@agjohnson: Add TOS PDF output (#6739)
@ericholscher: Don’t call virtualenv with --no-site-packages (#6738)
@GallowayJ: Drop mock dependency (#6723)
@humitos: New block on footer template to override from corporate (#6702)
@humitos: Point users to support email instead asking to open an issue (#6650)
Version 4.0.2
- Date:
March 04, 2020
@ericholscher: Don’t call virtualenv with --no-site-packages (#6738)
@stsewd: Catch ConnectionError from request on api timing out (#6735)
@ericholscher: Release 4.0.1 (#6733)
@humitos: Improve Proxito 404 handler to render user-facing Maze when needed (#6726)
Version 4.0.1
- Date:
March 03, 2020
@ericholscher: Add feature flag for branch & tag syncing to API. (#6729)
@stsewd: Be explicit on privacy level for search tests (#6713)
@stsewd: Make easy to run search tests in docker compose (#6711)
@davidfischer: Docker settings improvements (#6709)
@davidfischer: Workaround SameSite cookies (#6708)
@davidfischer: Figure out the host IP when using Docker (#6707)
@davidfischer: Pin the version of Azurite for docker-compose development (#6706)
@ericholscher: Release 4.0.0 (#6704)
@humitos: Rename docker settings to fix local environment (#6703)
@sduthil: API v3 doc: fix typos in URL for PATCH /versions/slug/ (#6698)
@humitos: Sort versions in-place to help performance (#6696)
@agjohnson: Add feature flag to just completely skip sync and symlink operations (#6689)
@humitos: Disable more loggings in development environment (#6683)
@davidfischer: Use x-forwarded-host in local docker environment (#6679)
@humitos: Allow user to set build.image: testing in the config file (#6676)
@agjohnson: Add azurite --loose option (#6669)
@davidfischer: Enable content security policy in report-only mode (#6642)
Version 4.0.0
- Date:
February 25, 2020
This release upgrades our codebase to run on Django 2.2. This is a breaking change, so we have released it as our 4th major version.
@ericholscher: Release 3.12.0 (#6674)
@davidfischer: Show message if version list truncated (#6276)
Version 3.12.0
- Date:
February 18, 2020
This version has two major changes:
It updates our default docker images to stable=5.0 and latest=6.0.
It changes our PR builder domain to readthedocs.build.
@humitos: Use PUBLIC_DOMAIN_USES_HTTPS for resolver tests (#6673)
@ericholscher: Remove old docker settings (#6670)
@ericholscher: Initial attempt to serve PR builds at readthedocs.build (#6629)
@ericholscher: Remove re-authing of users on downloads. (#6619)
@stsewd: Don’t trigger a sync twice on creation/deletion for GitHub (#6614)
@s-weigand: Add linkcheck test for the docs (#6543)
Version 3.11.6
- Date:
February 04, 2020
@ericholscher: Note we aren’t doing GSOC in 2020 (#6618)
@ericholscher: only serve x-rtd-slug project if it exists (#6617)
@ericholscher: Add check for a single_version project having a version_slug for PR builds (#6615)
@ericholscher: Raise exception when we get an InfiniteRedirect (#6609)
@ericholscher: Release 3.11.5 (#6608)
@humitos: Avoid infinite redirect on El Proxito on 404 (#6606)
@stsewd: Don’t error when killing/removing non-existent container (#6605)
@humitos: Use proper path to download/install readthedocs-ext (#6603)
@stsewd: Don’t assume build isn’t None in a docker build env (#6599)
@ericholscher: Fix issue with pip 20.0 breaking on install (#6598)
@agjohnson: Revert “Update celery requirements to its latest version” (#6596)
@Blackcipher101: Changed documentation of Api v3 (#6574)
@ericholscher: Use our standard auth mixin for proxito downloads (#6572)
@humitos: Move common docker compose configs to common repository (#6539)
Version 3.11.5
- Date:
January 29, 2020
@humitos: Avoid infinite redirect on El Proxito on 404 (#6606)
@humitos: Use proper path to download/install readthedocs-ext (#6603)
@stsewd: Don’t assume build isn’t None in a docker build env (#6599)
@ericholscher: Fix issue with pip 20.0 breaking on install (#6598)
@agjohnson: Revert “Update celery requirements to its latest version” (#6596)
@agjohnson: Release 3.11.4 again (#6594)
@agjohnson: Release 3.11.4 (#6593)
@ericholscher: Use our standard auth mixin for proxito downloads (#6572)
Version 3.11.4
- Date:
January 28, 2020
@humitos: Disable django debug toolbar in El Proxito (#6591)
@humitos: Merge pull request #6588 from readthedocs/humitos/support-ext (#6588)
@ericholscher: Use our standard auth mixin for proxito downloads (#6572)
@ericholscher: Fix /en/latest redirects (#6564)
@stsewd: Merge pull request #6561 from stsewd/move-method (#6561)
@ericholscher: Fix proxito redirects breaking without a / (#6558)
@stsewd: Don’t use an instance of VCS when isn’t needed (#6548)
@saadmk11: Add GitHub OAuth App Permission issue to PR Builder Troubleshooting docs (#6547)
@humitos: Move common docker compose configs to common repository (#6539)
@preetmishra: Update Transifex Integration details in Internationalization page. (#6531)
@Parth1811: Fixes #5388 – Added Documentation for constraint while using Conda (#6509)
@humitos: Show debug toolbar when running docker compose (#6488)
@dibyaaaaax: Add python examples for API v3 Documentation (#6487)
Version 3.11.3
- Date:
January 21, 2020
@ericholscher: Pass proper path to redirect code (#6555)
@Daniel-Mietchen: Fixing a broken link (#6550)
@humitos: Add netcat and telnet for celery debugging with rdb (#6518)
@dibyaaaaax: Add www to the broken link (#6513)
@davidfischer: Don’t allow empty tags (#6512)
@Parth1811: Fixes #6510 – Removed the show_analytics checks from the template (#6511)
@stsewd: Don’t pass build to environment when doing a sync (#6503)
@ericholscher: Release 3.11.2 (#6502)
@Blackcipher101: Added “dirhtml” target (#6500)
@humitos: Use CELERY_APP_NAME to call the proper celery app (#6499)
@stsewd: Copy path from host only when using a LocalBuildEnviroment (#6482)
@stsewd: Set env variables in the same way for DockerBuildEnvironment and Loc… (#6481)
@stsewd: Use environment variable per run, not per container (#6480)
@humitos: Update celery requirements to its latest version (#6448)
@stsewd: Execute checkout step respecting docker setting (#6436)
@humitos: Serve non-html at documentation domain though El Proxito (#6419)
Version 3.11.2
- Date:
January 08, 2020
@ericholscher: Fix link to my blog post breaking https (#6495)
@humitos: Use a fixed IP for NGINX under docker-compose (#6491)
@humitos: Add ‘index.html’ to the path before using storage.url(path) (#6476)
@agjohnson: Release 3.11.1 (#6473)
@humitos: Use tasks from common (including docker ones) (#6471)
@humitos: Use django storage to build URL returned by El Proxito (#6466)
@ericholscher: Handle GitHub Push events with deleted: true in the JSON (#6465)
@segevfiner: Remove a stray backtick from import-guide.rst (#6362)
Version 3.11.1
- Date:
December 18, 2019
@humitos: Use django storage to build URL returned by El Proxito (#6466)
@ericholscher: Handle GitHub Push events with deleted: true in the JSON (#6465)
@ericholscher: Update troubleshooting steps for PR builder (#6463)
@ericholscher: Add DOCKER_NORELOAD to compose settings (#6461)
@keshavvinayak01: Fixed remove_search_analytics issue (#6447)
@saadmk11: Fix logic to build internal/external versions on update_repos management command (#6442)
@humitos: Refactor get_downloads to make one query for default_version (#6441)
@humitos: Do not expose env variables on external versions (#6440)
@humitos: Bring Azure storage backend classes to this repository (#6433)
@stsewd: Show predefined match on automation rules admin (#6432)
@humitos: inv tasks to use when developing with docker (#6418)
@piyushpalawat99: Fix #6395 (#6402)
@ericholscher: Add an “Edit Versions” listing to the Admin menu (#6110)
@saadmk11: Extend webhook notifications with build status (#5621)
Version 3.11.0
- Date:
December 03, 2019
@davidfischer: Use media availability instead of querying the filesystem (#6428)
@stsewd: Remove beta note about sharing by password and header auth (#6426)
@humitos: Use trigger_build for update_repos command (#6422)
@humitos: Add AuthenticationMiddleware to El Proxito tests (#6416)
@humitos: Use WORKDIR to cd into a directory in Dockerfile (#6409)
@humitos: Use /data inside Azurite container to persist data (#6407)
@humitos: Serve non-html files from nginx (X-Accel-Redirect) (#6404)
@humitos: Allow to extend El Proxito views from commercial (#6397)
@humitos: Migrate El Proxito views to class-based views (#6396)
@agjohnson: Fix CSS and how we were handling html in automation rule UI (#6394)
@ericholscher: Release 3.10.0 (#6391)
@ericholscher: Redirect index files in proxito instead of serving (#6387)
@saadmk11: Refactor Subproject validation to use it for Forms and API (#6285)
Version 3.10.0
- Date:
November 19, 2019
@ericholscher: Redirect index files in proxito instead of serving (#6387)
@stsewd: Use github actions to trigger tests in corporate (#6376)
@saadmk11: Show only users projects in the APIv3 browsable form (#6374)
@davidfischer: Pin the node dependencies with a package-lock (#6370)
@ericholscher: Small optimization to not compute the highest version when it isn’t displayed (#6360)
@pyup-bot: pyup: Scheduled weekly dependency update for week 44 (#6347)
@ericholscher: Port additional features to proxito (#6286)
Version 3.9.0
- Date:
November 12, 2019
@davidfischer: Pin the node dependencies with a package-lock (#6370)
@humitos: Force PUBLIC_DOMAIN_USES_HTTPS on version compare tests (#6367)
@segevfiner: Remove a stray backtick from import-guide.rst (#6362)
@stsewd: Don’t compare inactive or non build versions (#6361)
@ericholscher: Change the default of proxied_api_host to api_host (#6355)
@pyup-bot: pyup: Scheduled weekly dependency update for week 43 (#6334)
@KartikKapil: added previous year gsoc projects (#6333)
@stsewd: Remove files from storage and delete indexes from ES when no longer needed (#6323)
@humitos: Revert “Adding RTD prefix for docker only in setting.py and all… (#6315)
@anindyamanna: Fixed Broken links (#6300)
@sciencewhiz: Fix missing word in wipe guide (#6294)
@jaferkhan: Removed unused code from view and template (#6250) (#6288)
@davidfischer: Store version media availability (#6278)
@davidfischer: Link to the terms of service (#6277)
@humitos: Default to None when using the Serializer as Form for Browsable… (#6266)
@ericholscher: Fix inactive version list not showing when no results returned (#6264)
@ericholscher: Downgrade django-storages. (#6263)
@ericholscher: Release 3.8.0 (#6262)
@davidfischer: Allow project badges for private version (#6252)
@saadmk11: Allow only post requests for delete views (#6242)
@Iamshankhadeep: Changing created to modified time (#6234)
@ericholscher: Initial stub of proxito (#6226)
@saadmk11: Add Better error message for lists in config file (#6200)
@dojutsu-user: Optimize json parsing (#6160)
@tapaswenipathak: Added missing i18n for footer api (#6144)
@dojutsu-user: Remove ‘highlight’ URL param from search results (#6087)
@Iamshankhadeep: Adding RTD prefix for docker only in setting.py and all other places where is needed (#6040)
Version 3.8.0
- Date:
October 09, 2019
@davidfischer: Allow project badges for private version (#6252)
@pyup-bot: pyup: Scheduled weekly dependency update for week 40 (#6251)
@humitos: Do not use –cache-dir for pip if CLEAN_AFTER_BUILD is enabled (#6239)
@ericholscher: Initial stub of proxito (#6226)
@davidfischer: Improve the version listview (#6224)
@stsewd: Override production media artifacts on test (#6220)
@davidfischer: Customize default build media storage for the FS (#6215)
@agjohnson: Release 3.7.5 (#6214)
@saadmk11: Only Build Active Versions from Build List Page Form (#6205)
@Iamshankhadeep: moved expandable_fields to meta class (#6198)
@dojutsu-user: Remove pie-chart from search analytics page (#6192)
@humitos: Create subproject relationship via APIv3 endpoint (#6176)
@davidfischer: Add terms of service (#6174)
@davidfischer: Document connected account permissions (#6172)
@dojutsu-user: Optimize json parsing (#6160)
@humitos: APIv3 endpoint: allow to modify a Project once it’s imported (#5952)
Version 3.7.5
- Date:
September 26, 2019
@davidfischer: Remove if storage blocks (#6191)
@davidfischer: Update security docs (#6179)
@davidfischer: Add the private spamfighting module to INSTALLED_APPS (#6177)
@davidfischer: Document connected account permissions (#6172)
@pyup-bot: pyup: Scheduled weekly dependency update for week 36 (#6158)
@saadmk11: Remove PR Builder Project Idea from RTD GSoC Docs (#6147)
@ericholscher: Serialize time in search queries properly (#6142)
@dojutsu-user: Add Search Guide (#6101)
@dojutsu-user: Record search queries smartly (#6088)
@dojutsu-user: Remove ‘highlight’ URL param from search results (#6087)
Version 3.7.4
- Date:
September 05, 2019
@ericholscher: Remove paid support callout (#6140)
@ericholscher: Fix IntegrationAdmin with raw_id_fields for Projects (#6136)
@ericholscher: Fix link to html_extra_path (#6135)
@stsewd: Move out authorization from FooterHTML view (#6133)
@agjohnson: Add setting for always cleaning the build post-build (#6132)
@pyup-bot: pyup: Scheduled weekly dependency update for week 35 (#6129)
@ericholscher: Use raw_id_fields in the TokenAdmin (#6116)
@davidfischer: Fixed footer ads supported on all themes (#6115)
@dojutsu-user: Record search queries smartly (#6088)
@dojutsu-user: Index more domain data into elasticsearch (#5979)
Version 3.7.3
- Date:
August 27, 2019
@davidfischer: Small improvements to the SEO guide (#6105)
@davidfischer: Update intersphinx mapping with canonical sources (#6085)
@davidfischer: Fix lingering 500 issues (#6079)
@davidfischer: Technical docs SEO guide (#6077)
@saadmk11: GitLab Build Status Reporting for PR Builder (#6076)
@davidfischer: Update ad details docs (#6074)
@davidfischer: Gold makes projects ad-free again (#6073)
@saadmk11: Auto Sync and Re-Sync for Manually Created Integrations (#6071)
@pyup-bot: pyup: Scheduled weekly dependency update for week 32 (#6067)
@davidfischer: Send media downloads to analytics (#6063)
@davidfischer: IPv6 in X-Forwarded-For fix (#6062)
@humitos: Remove warning about beta state of conda support (#6056)
@saadmk11: Update GitLab Webhook creating to enable merge request events (#6055)
@ericholscher: Release 3.7.2 (#6054)
@dojutsu-user: Update feature flags docs (#6053)
@saadmk11: Add index.html filename to the external doc URL (#6051)
@dojutsu-user: Search analytics improvements (#6050)
@stsewd: Sort versions taking into consideration the vcs type (#6049)
@humitos: Avoid returning invalid domain when using USE_SUBDOMAIN=True in dev (#6026)
@dojutsu-user: Search analytics (#6019)
@tapaswenipathak: Remove django-guardian model (#6005)
@stsewd: Add manager and description field to AutomationRule model (#5995)
@davidfischer: Cleanup project tags (#5983)
@davidfischer: Search indexing with storage (#5854)
@wilvk: fix sphinx startup guide to not to fail on rtd build as per #2569 (#5753)
Version 3.7.2
- Date:
August 08, 2019
@dojutsu-user: Update feature flags docs (#6053)
@saadmk11: Add index.html filename to the external doc URL (#6051)
@dojutsu-user: Search analytics improvements (#6050)
@stsewd: Sort versions taking into consideration the vcs type (#6049)
@ericholscher: When called via SyncRepositoryTaskStep this doesn’t exist (#6048)
@davidfischer: Fix around community ads with an explicit ad placement (#6047)
@ericholscher: Release 3.7.1 (#6045)
@saadmk11: Do not delete media storage files for external version (#6035)
@tapaswenipathak: Remove django-guardian model (#6005)
@davidfischer: Cleanup project tags (#5983)
@davidfischer: Search indexing with storage (#5854)
Version 3.7.1
- Date:
August 07, 2019
@pyup-bot: pyup: Scheduled weekly dependency update for week 31 (#6042)
@agjohnson: Fix issue with save on translation form (#6037)
@saadmk11: Do not delete media storage files for external version (#6035)
@saadmk11: Do not show wipe version message on build details page for External versions (#6034)
@saadmk11: Send site notification on Build status reporting failure and follow DRY (#6033)
@davidfischer: Use Read the Docs for Business everywhere (#6029)
@davidfischer: Remove project count on homepage (#6028)
@ericholscher: Update get_absolute_url for External Versions (#6020)
@dojutsu-user: Search analytics (#6019)
@saadmk11: Fix issues around remote repository for sending Build status reports (#6017)
@ericholscher: Expand the scope between before_vcs and after_vcs (#6015)
@davidfischer: Handle .x in version sorting (#6012)
@tapaswenipathak: Update note (#6008)
@davidfischer: Link to Read the Docs for Business docs from relevant sections (#6004)
@davidfischer: Note RTD for Biz requires SSL for custom domains (#6003)
@davidfischer: Allow searching in the Django Admin for gold (#6001)
@dojutsu-user: Fix logic involving creation of Sphinx Domains (#5997)
@dojutsu-user: Fix: no highlighting of matched keywords in search results (#5994)
@saadmk11: Do not copy external version artifacts twice (#5992)
@humitos: Missing list.extend line when appending conda dependencies (#5986)
@dojutsu-user: Use try…catch block with underscore.js template. (#5984)
@davidfischer: Cleanup project tags (#5983)
@ericholscher: Release 3.7.0 (#5982)
@dojutsu-user: Search Fix: section_subtitle_link is not defined (#5980)
@pyup-bot: pyup: Scheduled weekly dependency update for week 29 (#5975)
@davidfischer: Community only ads for more themes (#5973)
@humitos: Append core requirements to Conda environment file (#5956)
Version 3.7.0
- Date:
July 23, 2019
@dojutsu-user: Search Fix: section_subtitle_link is not defined (#5980)
@davidfischer: Community only ads for more themes (#5973)
@kittenking: Fix typos across readthedocs.org repository (#5971)
@dojutsu-user: Fix: parse_json also including html in titles (#5970)
@saadmk11: update external version check for notification task (#5969)
@pranay414: Improve error message for invalid submodule URLs (#5957)
@humitos: Append core requirements to Conda environment file (#5956)
@Abhi-khandelwal: Exclude Spam projects count from total_projects count (#5955)
@ericholscher: Release 3.6.1 (#5953)
@ericholscher: Missed a couple places to set READTHEDOCS_LANGUAGE (#5951)
@dojutsu-user: Hotfix: Return empty dict when no highlight dict is present (#5950)
@humitos: Use a cwd where the user has access inside the container (#5949)
@ericholscher: Integrate indoc search into our prod docs (#5946)
@ericholscher: Explicitly delete SphinxDomain objects from previous versions (#5945)
@ericholscher: Properly return None when there’s no highlight on a hit. (#5944)
@ericholscher: Add READTHEDOCS_LANGUAGE to the environment during builds (#5941)
@ericholscher: Merge the GSOC 2019 in-doc search changes (#5919)
@saadmk11: Add check for external version in conf.py.tmpl for warning banner (#5900)
@Abhi-khandelwal: Point users to commercial solution for their private repositories (#5849)
@ericholscher: Merge initial work from Pull Request Builder GSOC (#5823)
Version 3.6.1
- Date:
July 17, 2019
@ericholscher: Missed a couple places to set READTHEDOCS_LANGUAGE (#5951)
@dojutsu-user: Hotfix: Return empty dict when no highlight dict is present (#5950)
@humitos: Use a cwd where the user has access inside the container (#5949)
@ericholscher: Explicitly delete SphinxDomain objects from previous versions (#5945)
@ericholscher: Properly return None when there’s no highlight on a hit. (#5944)
@ericholscher: Release 3.6.0 (#5943)
@ericholscher: Bump the Sphinx extension to 1.0 (#5942)
@ericholscher: Add READTHEDOCS_LANGUAGE to the environment during builds (#5941)
@dojutsu-user: Small search doc fix (#5940)
@dojutsu-user: Indexing speedup (#5939)
@dojutsu-user: Small improvement in parse_json (#5938)
@dojutsu-user: Use attrgetter in sorted function (#5936)
@dojutsu-user: Fix spacing between the results and add highlight url param (#5932)
@ericholscher: Merge the GSOC 2019 in-doc search changes (#5919)
@dojutsu-user: Add tests for section-linking (#5918)
@humitos: APIv3 endpoint to manage Environment Variables (#5913)
@saadmk11: Add check for external version in conf.py.tmpl for warning banner (#5900)
@humitos: Update APIv3 documentation with latest changes (#5895)
Version 3.6.0
- Date:
July 16, 2019
@ericholscher: Bump the Sphinx extension to 1.0 (#5942)
@ericholscher: Add READTHEDOCS_LANGUAGE to the environment during builds (#5941)
@dojutsu-user: Small search doc fix (#5940)
@dojutsu-user: Indexing speedup (#5939)
@dojutsu-user: Small improvement in parse_json (#5938)
@dojutsu-user: Use attrgetter in sorted function (#5936)
@dojutsu-user: Fix spacing between the results and add highlight url param (#5932)
@Abhi-khandelwal: remove the usage of six (#5930)
@dojutsu-user: Fix count value of docsearch REST api (#5926)
@ericholscher: Merge the GSOC 2019 in-doc search changes (#5919)
@dojutsu-user: Add tests for section-linking (#5918)
@humitos: APIv3 endpoint to manage Environment Variables (#5913)
@saadmk11: Add Feature Flag to Enable External Version Building (#5910)
@ericholscher: Pass the build_pk to the task instead of the build object itself (#5904)
@saadmk11: Exclude external versions from get_latest_build (#5901)
@humitos: Update APIv3 documentation with latest changes (#5895)
@stsewd: Add tests for version and project querysets (#5894)
@davidfischer: Rework on documentation guides (#5893)
@davidfischer: Fix spaces in email subject link (#5891)
@saadmk11: Build only HTML and Save external version artifacts in different directory (#5886)
@ericholscher: Add config to Build and Version admin (#5877)
@pyup-bot: pyup: Scheduled weekly dependency update for week 26 (#5874)
@pranay414: Change rtfd to readthedocs (#5871)
@saadmk11: Send Build Status Report Using GitHub Status API (#5865)
@dojutsu-user: Add section linking for the search result (#5829)
Version 3.5.3
- Date:
June 19, 2019
@davidfischer: Treat docs warnings as errors (#5825)
@davidfischer: Fix some unclear verbiage (#5820)
@davidfischer: Rework documentation index page (#5819)
@davidfischer: Upgrade intersphinx to Django 1.11 (#5818)
@pyup-bot: pyup: Scheduled weekly dependency update for week 24 (#5817)
@humitos: Disable changing domain when editing the object (#5816)
@saadmk11: Update docs with sitemap sort order change (#5815)
@davidfischer: Optimize requests to APIv3 (#5803)
@ericholscher: Show build length in the admin (#5802)
@ericholscher: A few small improvements to help with search admin stuff (#5800)
@humitos: Use a real SessionBase object on FooterNoSessionMiddleware (#5797)
@davidfischer: Mention security issue in the changelog (#5790)
@stsewd: Use querysets from the class not from an instance (#5783)
@saadmk11: Add Build managers and Update Build Querysets. (#5779)
@davidfischer: Project advertising page/form update (#5777)
@davidfischer: Update docs around opt-out of ads (#5776)
@dojutsu-user: [Design Doc] In Doc Search UI (#5707)
@humitos: Support single version subprojects URLs to serve from Django (#5690)
@agjohnson: Add a contrib Dockerfile for local build image on Linux (#4608)
Version 3.5.2
This is a quick hotfix to the previous version.
- Date:
June 11, 2019
@ericholscher: Fix version of our sphinx-ext we’re installing (#5789)
Version 3.5.1
This version contained a security fix for an open redirect issue. The problem has been fixed and deployed on readthedocs.org. Users who run a private instance of Read the Docs from this codebase are encouraged to update to 3.5.1 as soon as possible.
- Date:
June 11, 2019
@saadmk11: Validate dict when parsing the mkdocs.yml file (#5775)
@davidfischer: Domain UI improvements (#5764)
@ericholscher: Try to fix Elastic connection pooling issues (#5763)
@pyup-bot: pyup: Scheduled weekly dependency update for week 22 (#5762)
@ericholscher: Try to fix Elastic connection pooling issues (#5760)
@davidfischer: Escape variables in mkdocs data (#5759)
@humitos: Serve 404/index.html file for htmldir Sphinx builder (#5754)
@wilvk: fix sphinx startup guide to not to fail on rtd build as per #2569 (#5753)
@agjohnson: Clarify latexmk option usage (#5745)
@ericholscher: Hotfix latexmx builder to ignore error codes (#5744)
@ericholscher: Hide the Code API search in the UX for now. (#5743)
@davidfischer: Add init.py under readthedocs/api (#5742)
@dojutsu-user: Fix design docs missing from toctree (#5741)
@ericholscher: Release 3.5.0 (#5740)
@davidfischer: Fix the sidebar ad color (#5731)
@humitos: Move version “Clean” button to details page (#5706)
@gorshunovr: Update flags documentation (#5701)
@davidfischer: Storage updates (#5698)
@davidfischer: Optimizations and UX improvements to the dashboard screen (#5637)
@chrisjsewell: Use --upgrade instead of --force-reinstall for pip installs (#5635)
@stsewd: Move file validations out of the config module (#5627)
@shivanshu1234: Add link to in-progress build from dashboard. (#5431)
Version 3.5.0
- Date:
May 30, 2019
@pyup-bot: pyup: Scheduled weekly dependency update for week 21 (#5737)
@humitos: Update feature flags exposed to user in docs (#5734)
@davidfischer: Fix the sidebar ad color (#5731)
@davidfischer: Create a funding file (#5729)
@davidfischer: Small commercial hosting page rework (#5728)
@mattparrilla: Add note about lack of support for private repos (#5726)
@cclauss: Identity is not the same thing as equality in Python (#5713)
@pyup-bot: pyup: Scheduled weekly dependency update for week 20 (#5712)
@humitos: Move version “Clean” button to details page (#5706)
@ericholscher: Explicitly mention a support email (#5703)
@davidfischer: Storage updates (#5698)
@pyup-bot: pyup: Scheduled weekly dependency update for week 19 (#5692)
@saadmk11: Warning about using sqlite 3.26.0 for development (#5681)
@davidfischer: Configure the security middleware (#5679)
@pyup-bot: pyup: Scheduled weekly dependency update for week 18 (#5667)
@saadmk11: pylint fix for notifications, restapi and config (#5664)
@humitos: Upgrade docker python package to latest release (#5654)
@pyup-bot: pyup: Scheduled weekly dependency update for week 17 (#5645)
@saadmk11: Sitemap hreflang syntax invalid for regional language variants fix (#5638)
@davidfischer: Optimizations and UX improvements to the dashboard screen (#5637)
@davidfischer: Redirect project slugs with underscores (#5634)
@saadmk11: Standardizing the use of settings directly (#5632)
@saadmk11: Note for Docker image size in Docker instructions (#5630)
@davidfischer: UX improvements around SSL certificates (#5629)
@davidfischer: Gold project sponsorship changes (#5628)
@davidfischer: Make sure there’s a contact when opting out of advertising (#5626)
@dojutsu-user: hotfix: correct way of getting environment variables (#5622)
@pyup-bot: pyup: Scheduled weekly dependency update for week 16 (#5619)
@ericholscher: Release 3.4.2 (#5613)
@ericholscher: Add explicit egg version to unicode-slugify (#5612)
@dojutsu-user: Remove ProxyMiddleware (#5607)
@dojutsu-user: Remove ‘Versions’ tab from Admin Dashboard. (#5600)
@dojutsu-user: Notify the user when deleting a superproject (#5596)
@saadmk11: Handle 401, 403 and 404 when setting up webhooks (#5589)
@saadmk11: Unify usage of settings and remove the usage of getattr for settings (#5588)
@saadmk11: Validate docs dir before writing custom js (#5569)
@shivanshu1234: Specify python3 in installation instructions. (#5552)
@davidfischer: Write build artifacts to (cloud) storage from build servers (#5549)
@saadmk11: “Default branch: latest” does not exist Fix. (#5547)
@dojutsu-user: Update readthedocs-environment.json file when env vars are added/deleted (#5540)
@ericholscher: Add search for DomainData objects (#5290)
@gorshunovr: Change version references to :latest tag (#5245)
@dojutsu-user: Fix buttons problems in ‘Change Email’ section. (#5219)
Version 3.4.2
- Date:
April 22, 2019
@ericholscher: Add explicit egg version to unicode-slugify (#5612)
@saadmk11: Update Environmental Variable character limit (#5597)
@davidfischer: Add meta descriptions to top documentation (#5593)
@pyup-bot: pyup: Scheduled weekly dependency update for week 14 (#5580)
@davidfischer: Fix for Firefox to close the ad correctly (#5571)
@davidfischer: Non mobile fixed footer ads (#5567)
@ericholscher: Release 3.4.1 (#5566)
@dojutsu-user: Update readthedocs-environment.json file when env vars are added/deleted (#5540)
@saadmk11: Sitemap assumes that all versions are translated Fix. (#5535)
@saadmk11: Remove Header Login button from login page (#5534)
@davidfischer: Optimize database performance of the footer API (#5530)
@stsewd: Don’t depend of enabled pdf/epub to show downloads (#5502)
@saadmk11: Don’t allow to create subprojects with same alias (#5404)
@saadmk11: Improve project translation listing Design under admin tab (#5380)
Version 3.4.1
- Date:
April 03, 2019
@pyup-bot: pyup: Scheduled weekly dependency update for week 13 (#5558)
@pyup-bot: pyup: Scheduled weekly dependency update for week 12 (#5536)
@saadmk11: Sitemap assumes that all versions are translated Fix. (#5535)
@saadmk11: Remove Header Login button from login page (#5534)
@stevepiercy: Add pylons-sphinx-themes to list of supported themes (#5533)
@davidfischer: Optimize database performance of the footer API (#5530)
@davidjb: Update contributing docs for RTD’s own docs (#5522)
@davidfischer: Guide users to the YAML config from the build detail page (#5519)
@stsewd: Link to the docdir of the remote repo in non-rtd themes for mkdocs (#5513)
@stevepiercy: Tidy up grammar, promote Unicode characters (#5511)
@stsewd: Catch specific exception for config not found (#5510)
@dojutsu-user: Use ValueError instead of InvalidParamsException (#5509)
@stsewd: Don’t depend of enabled pdf/epub to show downloads (#5502)
@ericholscher: Remove search & API from robots.txt (#5501)
@rshrc: Added note warning about using sqlite 3.26.0 in development (#5491)
@ericholscher: Fix bug that caused search objects not to delete (#5487)
@ericholscher: Release 3.4.0 (#5486)
@davidfischer: Promote the YAML config (#5485)
@pyup-bot: pyup: Scheduled weekly dependency update for week 11 (#5483)
@davidfischer: Enable Django Debug Toolbar in development (#5464)
@davidfischer: Optimize the version list screen (#5460)
@dojutsu-user: Remove asserts from code. (#5452)
@davidfischer: Optimize the repos API query (#5451)
@stsewd: Always update the commit of the stable version (#5421)
@orlnub123: Fix pip installs (#5386)
@davidfischer: Add an application form for community ads (#5379)
Version 3.4.0
- Date:
March 18, 2019
@davidfischer: Promote the YAML config (#5485)
@davidfischer: Enable Django Debug Toolbar in development (#5464)
@davidfischer: Optimize the version list screen (#5460)
@dojutsu-user: Update links to point to stable version. (#5455)
@dojutsu-user: Fix inconsistency in footer links (#5454)
@davidfischer: Optimize the repos API query (#5451)
@pyup-bot: pyup: Scheduled weekly dependency update for week 10 (#5432)
@shivanshu1234: Remove invalid example from v2.rst (#5430)
@stsewd: Always update the commit of the stable version (#5421)
@agarwalrounak: Document that people can create a version named stable (#5417)
@agarwalrounak: Update installation guide to include submodules (#5416)
@humitos: Communicate the project slug can be changed by requesting it (#5403)
@pyup-bot: pyup: Scheduled weekly dependency update for week 09 (#5395)
@dojutsu-user: Trigger build on default branch when saving a project (#5393)
@orlnub123: Fix pip installs (#5386)
@ericholscher: Be extra explicit about the CNAME (#5382)
@ericholscher: Release 3.3.1 (#5376)
@ericholscher: Add a GSOC section for openAPI (#5375)
@dojutsu-user: Make ‘default_version’ field as readonly if no active versions are found. (#5374)
@ericholscher: Be more defensive with our storage uploading (#5371)
@ericholscher: Check for two paths for each file (#5370)
@ericholscher: Don’t show projects in Sphinx Domain Admin sidebar (#5367)
@davidfischer: Remove the v1 API (#5293)
Version 3.3.1
- Date:
February 28, 2019
@ericholscher: Be more defensive with our storage uploading (#5371)
@ericholscher: Check for two paths for each file (#5370)
@ericholscher: Don’t show projects in Sphinx Domain Admin sidebar (#5367)
@ericholscher: Fix sphinx domain models and migrations (#5363)
@ericholscher: Release 3.3.0 (#5361)
@ericholscher: Fix search bug when an empty list of objects_id was passed (#5357)
@dojutsu-user: Add admin methods for reindexing versions from project and version admin. (#5343)
@stsewd: Cleanup a little of documentation_type from footer (#5315)
@ericholscher: Add modeling for intersphinx data (#5289)
@ericholscher: Revert “Merge pull request #4636 from readthedocs/search_upgrade” (#4716)
@safwanrahman: [GSoC 2018] All Search Improvements (#4636)
@stsewd: Add schema for configuration file with yamale (#4084)
Version 3.3.0
- Date:
February 27, 2019
@ericholscher: Fix search bug when an empty list of objects_id was passed (#5357)
@agjohnson: Update UI translations (#5354)
@ericholscher: Update GSOC page to mention we’re accepted. (#5353)
@pyup-bot: pyup: Scheduled weekly dependency update for week 08 (#5352)
@dojutsu-user: Increase path’s max_length for ImportedFile model to 4096 (#5345)
@dojutsu-user: Add admin methods for reindexing versions from project and version admin. (#5343)
@dojutsu-user: Remove deprecated code (#5341)
@stsewd: Don’t depend on specific data when catching exception (#5326)
@regisb: Fix “clean_builds” command argument parsing (#5320)
@stsewd: Cleanup a little of documentation_type from footer (#5315)
@humitos: Warning note about running ES locally for tests (#5314)
@humitos: Update documentation on running test for python environment (#5313)
@ericholscher: Release 3.2.3 (#5312)
@ericholscher: Add basic auth to the generic webhook API. (#5311)
@ericholscher: Fix an issue where we were not properly filtering projects (#5309)
@rexzing: Incompatible dependency for prospector with pylint-django (#5306)
@davidfischer: Allow extensions to control URL structure (#5296)
@ericholscher: Add modeling for intersphinx data (#5289)
@saadmk11: #4036 Updated build list to include an alert state (#5222)
@humitos: Use unicode-slugify to generate Version.slug (#5186)
@dojutsu-user: Add admin functions for wiping a version (#5140)
@davidfischer: Store ePubs and PDFs in media storage (#4947)
@ericholscher: Revert “Merge pull request #4636 from readthedocs/search_upgrade” (#4716)
@safwanrahman: [GSoC 2018] All Search Improvements (#4636)
Version 3.2.3
- Date:
February 19, 2019
@ericholscher: Add basic auth to the generic webhook API. (#5311)
@ericholscher: Fix an issue where we were not properly filtering projects (#5309)
@rexzing: Incompatible dependency for prospector with pylint-django (#5306)
@pyup-bot: pyup: Scheduled weekly dependency update for week 07 (#5305)
@davidfischer: Allow extensions to control URL structure (#5296)
Version 3.2.2
- Date:
February 13, 2019
@ericholscher: Support old jquery where responseJSON doesn’t exist (#5285)
@dojutsu-user: Fix error of travis (rename migration file) (#5282)
@discdiver: clarify github integration needs https:// prepended (#5273)
@davidfischer: Add note about security issue (#5263)
@ericholscher: Don’t delay search delete on project delete (#5262)
@agjohnson: Automate docs version from our setup.cfg (#5259)
@agjohnson: Add admin actions for building versions (#5255)
@ericholscher: Give the 404 page a title. (#5252)
@humitos: Make our SUFFIX default selection py2/3 compatible (#5251)
@ericholscher: Release 3.2.1 (#5248)
@ericholscher: Remove excluding files on search. (#5246)
@gorshunovr: Change version references to :latest tag (#5245)
@stsewd: Allow to override trigger_build from demo project (#5236)
@ericholscher: Change some info logging to debug to clean up build output (#5233)
@EJEP: Clarify ‘more info’ link in admin settings page (#5180)
Version 3.2.1
- Date:
February 07, 2019
@ericholscher: Remove excluding files on search. (#5246)
@ericholscher: Don’t update search on HTMLFile save (#5244)
@ericholscher: Be more defensive in our 404 handler (#5243)
@humitos: Install sphinx-notfound-page for building 404.html custom page (#5242)
@ericholscher: Release 3.2.0 (#5240)
Version 3.2.0
- Date:
February 06, 2019
@ericholscher: Support passing an explicit index_name for search indexing (#5239)
@davidfischer: Tweak some ad styles (#5237)
@ericholscher: Update our GSOC page for 2019 (#5210)
@humitos: Do not allow to merge ‘Status: blocked’ PRs (#5205)
@ericholscher: Remove approvals requirement from mergeable (#5200)
@agjohnson: Update project notification copy to past tense (#5199)
@ericholscher: Refactor search code (#5197)
@dojutsu-user: Change badge style (#5189)
@humitos: Upgrade all packages removing py2 compatibility (#5179)
@dojutsu-user: Small docs fix (#5176)
@stsewd: Sync all services even if one social accoun fails (#5171)
@ericholscher: Release 3.1.0 (#5170)
@stsewd: Remove logic for guessing slug from an unregistered domain (#5143)
@dojutsu-user: Docs for feature flag (#5043)
@stsewd: Remove usage of project.documentation_type in tasks (#4896)
@ericholscher: Reapply the Elastic Search upgrade to master (#4722)
Version 3.1.0
This version greatly improves our search capabilities, thanks to the Google Summer of Code. It is a large upgrade that moves us to the latest Elasticsearch, and we hope to ship another iteration of search soon after this.
- Date:
January 24, 2019
@ericholscher: Fix docs build (#5164)
@ericholscher: Release 3.0.0 (#5163)
@dojutsu-user: Sort versions smartly everywhere (#5157)
@dojutsu-user: Implement get objects or log (#4900)
@stsewd: Remove usage of project.documentation_type in tasks (#4896)
@ericholscher: Reapply the Elastic Search upgrade to master (#4722)
Version 3.0.0
Read the Docs now only supports Python 3.6+. This applies to people running the software on their own servers; documentation builds continue to work across all supported Python versions.
- Date:
January 23, 2019
@dojutsu-user: Sort versions smartly everywhere (#5157)
@ericholscher: Fix Sphinx conf.py inserts (#5150)
@ericholscher: Upgrade recommonmark to latest and fix integration (#5146)
@ericholscher: Fix local-docs-build requirements (#5136)
@humitos: Configuration file for ProBot Mergeable Bot (#5132)
@xavfernandez: docs: fix integration typos (#5128)
@Hamdy722: Update LICENSE (#5125)
@humitos: Validate mkdocs.yml config on values that we manipulate (#5119)
@ericholscher: Check that the repo exists before trying to get a git commit (#5115)
@ericholscher: Release 2.8.5 (#5111)
@stsewd: Use the python path from virtualenv in Conda (#5110)
@humitos: Feature flag to use readthedocs/build:testing image (#5109)
@stsewd: Use python from virtualenv’s bin directory when executing commands (#5107)
@dojutsu-user: Split requirements/pip.txt (#5100)
@humitos: Do not list banned projects under /projects/ (#5097)
@davidfischer: Fire a signal for domain verification (eg. for SSL) (#5071)
@dojutsu-user: Use default settings for Config object (#5056)
@agjohnson: Allow large form posts via multipart encoded forms to command API (#5000)
@dojutsu-user: Validate url from webhook notification (#4983)
@dojutsu-user: Display error, using inbuilt notification system, if primary email is not verified (#4964)
@dojutsu-user: Implement get objects or log (#4900)
@humitos: CRUD for EnvironmentVariables from Project’s admin (#4899)
@stsewd: Remove usage of project.documentation_type in tasks (#4896)
@dojutsu-user: Fix the failing domain deletion task (#4891)
@humitos: Appropriate logging when a LockTimeout for VCS is reached (#4804)
@bansalnitish: Added a link to open new issue with prefilled details (#3683)
Version 2.8.5
- Date:
January 15, 2019
@stsewd: Use the python path from virtualenv in Conda (#5110)
@humitos: Feature flag to use readthedocs/build:testing image (#5109)
@stsewd: Use python from virtualenv’s bin directory when executing commands (#5107)
@agjohnson: Fix common pieces (#5095)
@rainwoodman: Suppress progress bar of the conda command. (#5094)
@humitos: Remove unused suggestion block from 404 pages (#5087)
@humitos: Remove header nav (Login/Logout button) on 404 pages (#5085)
@agjohnson: Split up deprecated view notification to GitHub and other webhook endpoints (#5083)
@davidfischer: Fire a signal for domain verification (eg. for SSL) (#5071)
@agjohnson: Update copy on notifications for github services deprecation (#5067)
@dojutsu-user: Reduce logging to sentry (#5054)
@discdiver: fixed missing apostrophe for possessive “project’s” (#5052)
@dojutsu-user: Template improvements in “gold/subscription_form.html” (#5049)
@stephenfin: Add temporary method for disabling shallow cloning (#5031) (#5036)
@dojutsu-user: Change default_branch value from Version.slug to Version.identifier (#5034)
@humitos: Convert an IRI path to URI before setting as NGINX header (#5024)
@safwanrahman: index project asynchronously (#5023)
@ericholscher: Release 2.8.4 (#5011)
@davidfischer: Tweak sidebar ad priority (#5005)
@stsewd: Replace git status and git submodules status for gitpython (#5002)
@davidfischer: Backport jquery 2432 to Read the Docs (#5001)
@dojutsu-user: Make $ unselectable in docs (#4990)
@dojutsu-user: Remove deprecated “models.permalink” (#4975)
@dojutsu-user: Add validation for tags of length greater than 100 characters (#4967)
@dojutsu-user: Add test case for send_notifications on VersionLockedError (#4958)
@dojutsu-user: Remove trailing slashes on svn checkout (#4951)
@humitos: CRUD for EnvironmentVariables from Project’s admin (#4899)
@humitos: Notify users about the usage of deprecated webhooks (#4898)
@dojutsu-user: Disable django guardian warning (#4892)
@humitos: Handle 401, 403 and 404 status codes when hitting GitHub for webhook (#4805)
Version 2.8.4
- Date:
December 17, 2018
@davidfischer: Tweak sidebar ad priority (#5005)
@davidfischer: Backport jquery 2432 to Read the Docs (#5001)
@ericholscher: Remove codecov comments and project coverage CI status (#4996)
@dojutsu-user: Link update on FAQ page (#4988)
@ericholscher: Only use remote branches for our syncing. (#4984)
@humitos: Sanitize output and chunk it at DATA_UPLOAD_MAX_MEMORY_SIZE (#4982)
@humitos: Modify DB field for container_time_limit to be an integer (#4979)
@dojutsu-user: Remove deprecated imports from “urlresolvers” (#4976)
@davidfischer: Workaround for a django-storages bug (#4963)
@ericholscher: Release 2.8.3 (#4961)
@dojutsu-user: Validate profile form fields (#4910)
@davidfischer: Calculate actual ad views (#4885)
@humitos: Allow all /api/v2/ CORS if the Domain is known (#4880)
@dojutsu-user: Disable django.security.DisallowedHost from logging (#4879)
@dojutsu-user: Remove ‘Sphinx Template Changes’ From Docs (#4878)
@dojutsu-user: Make form for adopting project a choice field (#4841)
@dojutsu-user: Add ‘Branding’ under the ‘Business Info’ section and ‘Guidelines’ on ‘Design Docs’ (#4830)
@dojutsu-user: Raise 404 at SubdomainMiddleware if the project does not exist. (#4795)
@dojutsu-user: Add help_text in the form for adopting a project (#4781)
@dojutsu-user: Remove /embed API endpoint (#4771)
@dojutsu-user: Improve unexpected error message when build fails (#4754)
@dojutsu-user: Change the way of using login_required decorator (#4723)
@dojutsu-user: Fix the form for adopting a project (#4721)
Version 2.8.3
- Date:
December 05, 2018
@humitos: Pin redis to the current stable and compatible version (#4956)
@ericholscher: Release 2.8.2 (#4931)
@dojutsu-user: Validate profile form fields (#4910)
@davidfischer: Calculate actual ad views (#4885)
@stsewd: Sync versions when creating/deleting versions (#4876)
@dojutsu-user: Remove unused project model fields (#4870)
Version 2.8.2
- Date:
November 28, 2018
@safwanrahman: Tuning Elasticsearch for search improvements (#4909)
@edmondchuc: Fixed some typos. (#4906)
@humitos: Upgrade stripe Python package to the latest version (#4904)
@humitos: Retry on API failure when connecting from builders (#4902)
@humitos: Expose environment variables from database into build commands (#4894)
@ericholscher: Use python to expand the cwd instead of environment variables (#4882)
@dojutsu-user: Disable django.security.DisallowedHost from logging (#4879)
@dojutsu-user: Remove ‘Sphinx Template Changes’ From Docs (#4878)
@ericholscher: Unbreak the admin on ImportedFile by using raw_id_fields (#4874)
@stsewd: Check if latest exists before updating identifier (#4873)
@ericholscher: Release 2.8.1 (#4872)
@dojutsu-user: Update django-guardian settings (#4871)
@dojutsu-user: Change ‘VerisionLockedTimeout’ to ‘VersionLockedError’ in comment. (#4859)
@humitos: Appropriate logging when a LockTimeout for VCS is reached (#4804)
@stsewd: Remove support for multiple configurations in one file (#4800)
@invinciblycool: Redirect to build detail post manual build (#4622)
@davidfischer: Enable timezone support and set timezone to UTC (#4545)
@chirathr: Webhook notification URL size validation check (#3680)
Version 2.8.1
- Date:
November 06, 2018
@ericholscher: Fix migration name on modified date migration (#4867)
@dojutsu-user: Change ‘VerisionLockedTimeout’ to ‘VersionLockedError’ in comment. (#4859)
@ericholscher: Shorten project name to match slug length (#4856)
@stsewd: Generic message for parser error of config file (#4853)
@ericholscher: Add modified_date to ImportedFile. (#4850)
@ericholscher: Use raw_id_fields so that the Feature admin loads (#4849)
@benjaoming: Version compare warning text (#4842)
@dojutsu-user: Make form for adopting project a choice field (#4841)
@humitos: Do not send notification on VersionLockedError (#4839)
@ericholscher: Add all migrations that are missing from model changes (#4837)
@ericholscher: Add docstring to DrfJsonSerializer so we know why it’s there (#4836)
@ericholscher: Show the project’s slug in the dashboard (#4834)
@ericholscher: Allow filtering builds by commit. (#4831)
@dojutsu-user: Add ‘Branding’ under the ‘Business Info’ section and ‘Guidelines’ on ‘Design Docs’ (#4830)
@davidfischer: Migrate old passwords without “set_unusable_password” (#4829)
@humitos: Do not import the Celery worker when running the Django app (#4824)
@invinciblycool: Add MkDocsYAMLParseError (#4814)
@humitos: Feature flag to make readthedocs theme default on MkDocs docs (#4802)
@ericholscher: Allow use of file:// urls in repos during development. (#4801)
@ericholscher: Release 2.7.2 (#4796)
@dojutsu-user: Raise 404 at SubdomainMiddleware if the project does not exist. (#4795)
@dojutsu-user: Add help_text in the form for adopting a project (#4781)
@sriks123: Remove logic around finding config file inside directories (#4755)
@dojutsu-user: Improve unexpected error message when build fails (#4754)
@stsewd: Don’t build latest on webhook if it is deactivated (#4733)
@dojutsu-user: Change the way of using login_required decorator (#4723)
@invinciblycool: Remove unused views and their translations. (#4632)
@invinciblycool: Redirect to build detail post manual build (#4622)
@anubhavsinha98: Issue #4551 Changed mock docks to use sphinx (#4569)
@Alig1493: Fix for issue #4092: Remove unused field from Project model (#4431)
@xrmx: make it easier to use a different default theme (#4278)
@humitos: Document alternate domains for business site (#4271)
@xrmx: restapi/client: don’t use DRF parser for parsing (#4160)
@julienmalard: New languages (#3759)
@Alig1493: [Fixed #872] Filter Builds according to commit (#3544)
Version 2.8.0
- Date:
October 30, 2018
The major change is an upgrade to Django 1.11.
@humitos: Feature flag to make readthedocs theme default on MkDocs docs (#4802)
@humitos: Pin missing dependency for the MkDocs guide compatibility (#4798)
@ericholscher: Release 2.7.2 (#4796)
@humitos: Do not log as error a webhook with an invalid branch name (#4779)
@ericholscher: Run travis on release branches (#4763)
@ericholscher: Remove Eric & Anthony from ADMINS & MANAGERS settings (#4762)
@davidfischer: Django 1.11 upgrade (#4750)
@stsewd: Remove hardcoded constant from config module (#4704)
Version 2.7.2
- Date:
October 23, 2018
@humitos: Validate the slug generated is valid before importing a project (#4780)
@humitos: Do not log as error a webhook with an invalid branch name (#4779)
@ericholscher: Add an index page to our design docs. (#4775)
@dojutsu-user: Remove /embed API endpoint (#4771)
@ericholscher: Remove Eric & Anthony from ADMINS & MANAGERS settings (#4762)
@humitos: Do not re-raise the exception if the one that we are checking (#4761)
@humitos: Do not fail when unlinking an non-existing path (#4760)
@humitos: Allow to extend the DomainForm from outside (#4752)
@davidfischer: Fixes an OSX issue with the test suite (#4748)
@davidfischer: Make storage syncers extend from a base class (#4742)
@ericholscher: Revert “Upgrade theme media to 0.4.2” (#4735)
@ericholscher: Upgrade theme media to 0.4.2 (#4734)
@stsewd: Extend install option from config file (v2, schema only) (#4732)
@ericholscher: Fix get_vcs_repo by moving it to the Mixin (#4727)
@humitos: Guide explaining how to keep compatibility with mkdocs (#4726)
@ericholscher: Release 2.7.1 (#4725)
@dojutsu-user: Fix the form for adopting a project (#4721)
@ericholscher: Remove logging verbosity on syncer failure (#4717)
@davidfischer: Improve the getting started docs (#4676)
@stsewd: Strict validation in configuration file (v2 only) (#4607)
Version 2.7.1
- Date:
October 04, 2018
@ericholscher: Revert “Merge pull request #4636 from readthedocs/search_upgrade” (#4716)
@ericholscher: Reduce the logging we do on CNAME 404 (#4715)
@davidfischer: Minor redirect admin improvements (#4709)
@humitos: Define the doc_search reverse URL from inside the __init__ on test (#4703)
@ericholscher: Revert “auto refresh false” (#4701)
@browniebroke: Remove unused package nilsimsa (#4697)
@safwanrahman: Tuning elasticsearch shard and replica (#4689)
@ericholscher: Fix bug where we were not indexing Sphinx HTMLDir projects (#4685)
@ericholscher: Fix the queryset used in chunking (#4683)
@ericholscher: Fix python 2 syntax for getting first key in search index update (#4682)
@ericholscher: Release 2.7.0 (#4681)
@davidfischer: Increase footer ad text size (#4678)
@davidfischer: Fix broken docs links (#4677)
@ericholscher: Remove search autosync from tests so local tests work (#4675)
@davidfischer: Ad customization docs (#4659)
@davidfischer: Fix a typo in the privacy policy (#4658)
@davidfischer: Create an explicit ad placement (#4647)
@agjohnson: Use collectstatic on media/, without collecting user files (#4502)
@agjohnson: Add docs on our roadmap process (#4469)
@humitos: Send notifications when generic/unhandled failures (#3864)
Version 2.7.0
- Date:
September 29, 2018
Reverted, do not use
Version 2.6.6
- Date:
September 25, 2018
@davidfischer: Fix a markdown test error (#4663)
@davidfischer: Ad customization docs (#4659)
@davidfischer: Fix a typo in the privacy policy (#4658)
@agjohnson: Put search step back into project build task (#4655)
@davidfischer: Create an explicit ad placement (#4647)
@safwanrahman: [Fix #4247] deleting old search code (#4635)
@davidfischer: Make ads more obvious that they are ads (#4628)
@agjohnson: Change mentions of “CNAME” -> custom domain (#4627)
@invinciblycool: Use validate_dict for more accurate error messages (#4617)
@safwanrahman: fixing the indexing (#4615)
@agjohnson: Add cwd to subprocess calls (#4611)
@agjohnson: Make restapi URL additions conditional (#4609)
@agjohnson: Ability to use supervisor from python 2.7 and still run Python 3 (#4606)
@humitos: Return 404 for inactive versions and allow redirects on them (#4599)
@davidfischer: Fixes an issue with duplicate gold subscriptions (#4597)
@davidfischer: Fix ad block nag project issue (#4596)
@humitos: Run all our tests with Python 3.6 on Travis (#4592)
@humitos: Sanitize command output when running under DockerBuildEnvironment (#4591)
@humitos: Force resolver to use PUBLIC_DOMAIN over HTTPS if not Domain.https (#4579)
@davidfischer: Updates and simplification for mkdocs (#4556)
@humitos: Docs for hiding “On …” section from versions menu (#4547)
@safwanrahman: [Fix #4268] Adding Documentation for upgraded Search (#4467)
@humitos: Clean CC sensible data on Gold subscriptions (#4291)
@stsewd: Update docs to match the new triague guidelines (#4260)
@xrmx: Make the STABLE and LATEST constants overridable (#4099)
Version 2.6.5
- Date:
August 29, 2018
@agjohnson: Respect user language when caching homepage (#4585)
@humitos: Add start and termination to YAML file regex (#4584)
@safwanrahman: [Fix #4576] Do not delete projects which have multiple users (#4577)
Version 2.6.4
- Date:
August 29, 2018
@davidfischer: Add a flag to disable docsearch (#4570)
@davidfischer: Add a note about specifying the version of build tools (#4562)
@davidfischer: Serve badges directly from local filesystem (#4561)
@humitos: Sanitize BuildCommand.output by removing NULL characters (#4552)
@davidfischer: Fix changelog for 2.6.3 (#4548)
@ericholscher: Remove hiredis (#4542)
@davidfischer: Use the STATIC_URL for static files to avoid redirection (#4522)
@StefanoChiodino: Allow for period as a prefix and yaml extension for config file (#4512)
@AumitLeon: Update information on mkdocs build process (#4508)
@humitos: Fix Exact Redirect to work properly when using $rest keyword (#4501)
@humitos: Mark some BuildEnvironmentError exceptions as Warning and do not log them (#4495)
@xrmx: projects: don’t explode trying to update UpdateDocsTaskStep state (#4485)
@humitos: Note with the developer flow to update our app translations (#4481)
@humitos: Add trimmed to all multilines blocktrans tags (#4480)
@humitos: Example and note with usage of trimmed option in blocktrans (#4479)
@humitos: Update Transifex resources for our documentation (#4478)
@stsewd: Port https://github.com/readthedocs/readthedocs-build/pull/38/ (#4461)
@humitos: Skip tags that point to blob objects instead of commits (#4442)
@stsewd: Document python.use_system_site_packages option (#4422)
@humitos: More tips about how to reduce resources usage (#4419)
@xrmx: projects: user in ProjectQuerySetBase.for_admin_user is mandatory (#4417)
Version 2.6.3
- Date:
August 18, 2018
Release to Azure!
@davidfischer: Add Sponsors list to footer (#4424)
@xrmx: templates: mark missing string for translation on project edit (#4518)
@ericholscher: Performance improvement: cache version listing on the homepage (#4526)
@agjohnson: Remove mailgun from our dependencies (#4531)
@davidfischer: Improved ad block detection (#4532)
@agjohnson: Revert “Remove SelectiveFileSystemFolder finder workaround” (#4533)
@davidfischer: Slight clarification on turning off ads for a project (#4534)
@davidfischer: Fix the sponsor image paths (#4535)
@agjohnson: Update build assets (#4537)
Version 2.6.2
- Date:
August 14, 2018
@davidfischer: Custom domain clarifications (#4514)
@davidfischer: Support ads on pallets themes (#4499)
@davidfischer: Only use HostHeaderSSLAdapter for SSL/HTTPS connections (#4498)
@keflavich: Very minor English correction (#4497)
@davidfischer: All static media is run through “collectstatic” (#4489)
@nijel: Document expected delay on CNAME change and need for CAA (#4487)
@davidfischer: Allow enforcing HTTPS for custom domains (#4483)
@davidfischer: Add some details around community ad qualifications (#4436)
@davidfischer: Updates to manifest storage (#4430)
@davidfischer: Update alt domains docs with SSL (#4425)
@agjohnson: Add SNI support for API HTTPS endpoint (#4423)
@davidfischer: API v1 cleanup (#4415)
@davidfischer: Allow filtering versions by active (#4414)
@safwanrahman: [Fix #4407] Port Project Search for Elasticsearch 6.x (#4408)
@davidfischer: Add client ID to Google Analytics requests (#4404)
@xrmx: projects: fix filtering in projects_tag_detail (#4398)
@davidfischer: Fix a proxy model bug related to ad-free (#4390)
@davidfischer: Do not access database from builds to check ad-free (#4387)
@stsewd: Set full source_file path for default configuration (#4379)
@humitos: Make get_version usable from a specified path (#4376)
@humitos: Check for ‘options’ in update_repos command (#4373)
@safwanrahman: [Fix #4333] Implement asynchronous search reindex functionality using celery (#4368)
@davidfischer: Remove the UID from the GA measurement protocol (#4347)
@agjohnson: Show subprojects in search results (#1866)
Version 2.6.1
- Date:
July 17, 2018
Version 2.6.0
- Date:
July 16, 2018
Adds initial support for HTTPS on custom domains
@stsewd: Revert “projects: serve badge with same protocol as site” (#4353)
@davidfischer: Do not overwrite sphinx context variables feature (#4349)
@stsewd: Calrify docs about how rtd select the stable version (#4348)
@davidfischer: Remove the UID from the GA measurement protocol (#4347)
@davidfischer: Improvements for the build/version admin (#4344)
@safwanrahman: [Fix #4265] Porting frontend docsearch to work with new API (#4340)
@davidfischer: Warning about theme context implementation status (#4335)
@davidfischer: Disable the ad block nag for ad-free projects (#4329)
@safwanrahman: [fix #4265] Port Document search API for Elasticsearch 6.x (#4309)
@stsewd: Refactor configuration object to class based (#4298)
Version 2.5.3
- Date:
July 05, 2018
@davidfischer: Add a flag for marking a project ad-free (#4313)
@davidfischer: Use “npm run lint” from tox (#4312)
@davidfischer: Fix issues building static assets (#4311)
@safwanrahman: [Fix #2457] Implement exact match search (#4292)
@davidfischer: API filtering improvements (#4285)
@annegentle: Remove self-referencing links for webhooks docs (#4283)
@safwanrahman: [Fix #2328 #2013] Refresh search index and test for case insensitive search (#4277)
@xrmx: doc_builder: clarify sphinx backend append_conf docstring (#4276)
@davidfischer: Add documentation for APIv2 (#4274)
@ericholscher: Fix our use of --use-wheel in pip. (#4269)
@agjohnson: Revert “Merge pull request #4206 from FlorianKuckelkorn/fix/pip-breaking-change” (#4261)
@humitos: Fix triggering a build for a skipped project (#4255)
@davidfischer: Allow staying logged in for longer (#4236)
@safwanrahman: Upgrade Elasticsearch to version 6.x (#4211)
Version 2.5.2
- Date:
June 18, 2018
@davidfischer: Add a page detailing ad blocking (#4244)
@xrmx: projects: serve badge with same protocol as site (#4228)
@FlorianKuckelkorn: Fixed breaking change in pip 10.0.0b1 (2018-03-31) (#4206)
@StefanoChiodino: Document that readthedocs file can now have yaml extension (#4129)
@humitos: Downgrade docker to 3.1.3 because of timeouts in EXEC call (#4241)
@humitos: Handle revoked oauth permissions by the user (#4074)
@humitos: Allow to hook the initial build from outside (#4033)
Version 2.5.1
- Date:
June 14, 2018
@stsewd: Add feature to build json with html in the same build (#4229)
@davidfischer: Prioritize ads based on content (#4224)
@mostaszewski: #4170 - Link the version in the footer to the changelog (#4217)
@SuriyaaKudoIsc: Use the latest YouTube share URL (#4209)
@davidfischer: Allow staff to trigger project builds (#4207)
@davidfischer: Use autosectionlabel in the privacy policy (#4204)
@davidfischer: These links weren’t correct after #3632 (#4203)
@davidfischer: Release 2.5.0 (#4200)
@ericholscher: Fix Build: Convert md to rst in docs (#4199)
@ericholscher: Updates to #3850 to fix merge conflict (#4198)
@ericholscher: Build on top of #3881 and put docs in custom_installs. (#4196)
@davidfischer: Increase the max theme version (#4195)
@ericholscher: Remove maxcdn reqs (#4194)
@ericholscher: Add missing gitignore item for ES testing (#4193)
@xrmx: locale: update and build the english translation (#4187)
@humitos: Upgrade celery to avoid AtributeError:async (#4185)
@stsewd: Prepare code for custo mkdocs.yaml location (#4184)
@agjohnson: Updates to our triage guidelines (#4180)
@davidfischer: Server side analytics (#4131)
@stsewd: Add schema for configuration file with yamale (#4084)
@davidfischer: Ad block nag to urge people to whitelist (#4037)
@benjaoming: Add Mexican Spanish as a project language (#3588)
Version 2.5.0
- Date:
June 06, 2018
@ericholscher: Fix Build: Convert md to rst in docs (#4199)
@ericholscher: Remove maxcdn reqs (#4194)
@ericholscher: Add missing gitignore item for ES testing (#4193)
@xrmx: locale: update and build the english translation (#4187)
@safwanrahman: Test for search functionality (#4116)
@davidfischer: Update mkdocs to the latest (#4041)
@davidfischer: Ad block nag to urge people to whitelist (#4037)
@davidfischer: Decouple the theme JS from readthedocs.org (#3968)
@fenilgandhi: Add support for different badge styles (#3632)
@benjaoming: Add Mexican Spanish as a project language (#3588)
@stsewd: Wrap versions’ list to look more consistent (#3445)
@agjohnson: Move CDN code to external abstraction (#2091)
Version 2.4.0
- Date:
May 31, 2018
This fixes assets that were generated against old dependencies in 2.3.14
@agjohnson: Fix issues with search javascript (#4176)
@davidfischer: Update the privacy policy date (#4171)
@davidfischer: Note about state and metro ad targeting (#4169)
@ericholscher: Add another guide around fixing memory usage. (#4168)
@stsewd: Add “edit” and “view docs” buttons to subproject list (#3572)
@kennethlarsen: Remove outline reset to bring back outline (#3512)
Version 2.3.14
- Date:
May 30, 2018
@ericholscher: Remove CSS override that doesn’t exist. (#4165)
@davidfischer: Include a DMCA request template (#4164)
@davidfischer: No CSRF cookie for docs pages (#4153)
@davidfischer: Small footer rework (#4150)
@ericholscher: Remove deploy directory which is unused. (#4147)
@davidfischer: Add Intercom to the privacy policy (#4145)
@davidfischer: Fix with Lato Bold font (#4138)
@davidfischer: Release 2.3.13 (#4137)
@davidfischer: Build static assets (#4136)
@xrmx: oauth/services: correct error handling in paginate (#4134)
@cedk: Use quiet mode to retrieve branches from mercurial (#4114)
@humitos: Add has_valid_clone and has_valid_webhook to ProjectAdminSerializer (#4107)
@stsewd: Put the rtd extension to the beginning of the list (#4054)
@davidfischer: Do Not Track support (#4046)
@stsewd: Set urlconf to None after changing SUBDOMAIN setting (#4032)
@xrmx: templates: mark a few more strings for translations (#3869)
@ze: Make search bar in dashboard have a more clear message. (#3844)
@varunotelli: Pointed users to Python3.6 (#3817)
@ajatprabha: Ticket #3694: rename owners to maintainers (#3703)
@SanketDG: Refactor to replace old logging to avoid mangling (#3677)
@techtonik: Update Git on prod (#3615)
@cclauss: Modernize Python 2 code to get ready for Python 3 (#3514)
Version 2.3.13
- Date:
May 23, 2018
@davidfischer: Build static assets (#4136)
@davidfischer: Use the latest Lato release (#4093)
@davidfischer: Update Gold Member marketing (#4063)
@davidfischer: Fix missing fonts (#4060)
@stsewd: Additional validation when changing the project language (#3790)
Version 2.3.12
- Date:
May 21, 2018
@davidfischer: Display feature flags in the admin (#4108)
@humitos: Set valid clone in project instance inside the version object also (#4105)
@davidfischer: Use the latest theme version in the default builder (#4096)
@humitos: Use next field to redirect user when login is done by social (#4083)
@humitos: Update the documentation_type when it’s set to ‘auto’ (#4080)
@brainwane: Update link to license in philosophy document (#4059)
@agjohnson: Update local assets for theme to 0.3.1 tag (#4047)
@davidfischer: Subdomains use HTTPS if settings specify (#3987)
@davidfischer: Draft Privacy Policy (#3978)
@humitos: Allow import Gitlab repo manually and set a webhook automatically (#3934)
@davidfischer: Enable ads on the readthedocs mkdocs theme (#3922)
@bansalnitish: Fixes #2953 - Url resolved with special characters (#3725)
Version 2.3.11
- Date:
May 01, 2018
@agjohnson: Update local assets for theme to 0.3.1 tag (#4047)
@davidfischer: Detail where ads are shown (#4031)
@ericholscher: Make email verification optional for dev (#4024)
@davidfischer: Support sign in and sign up with GH/GL/BB (#4022)
@agjohnson: Remove old varnish purge utility function (#4019)
@agjohnson: Remove build queue length warning on build list page (#4018)
@stsewd: Don’t check order on assertQuerysetEqual on tests for subprojects (#4016)
@davidfischer: MkDocs projects use RTD’s analytics privacy improvements (#4013)
@davidfischer: Remove typekit fonts (#3982)
@stsewd: Move dynamic-fixture to testing requirements (#3956)
Version 2.3.10
- Date:
April 24, 2018
Version 2.3.9
- Date:
April 20, 2018
@agjohnson: Fix recursion problem more generally (#3989)
Version 2.3.8
- Date:
April 20, 2018
@agjohnson: Give TaskStep class knowledge of the underlying task (#3983)
@humitos: Resolve domain when a project is a translation of itself (#3981)
Version 2.3.7
- Date:
April 19, 2018
@humitos: Fix server_error_500 path on single version (#3975)
@davidfischer: Fix bookmark app lint failures (#3969)
@humitos: Use latest setuptools (39.0.1) by default on build process (#3967)
@ericholscher: Fix exact redirects. (#3965)
@humitos: Make resolve_domain work when a project is subproject of itself (#3962)
@humitos: Remove django-celery-beat and use the default scheduler (#3959)
@davidfischer: Add advertising details docs (#3955)
@davidfischer: Small change to Chinese language names (#3947)
@agjohnson: Don’t share state in build task (#3946)
@davidfischer: Fixed footer ad width fix (#3944)
@humitos: Allow extend Translation and Subproject form logic from corporate (#3937)
@humitos: Resync valid webhook for project manually imported (#3935)
@humitos: Mention RTD in the Project URL of the issue template (#3928)
@davidfischer: Correctly report mkdocs theme name (#3920)
@davidfischer: Show an adblock admonition in the dev console (#3894)
@xrmx: templates: mark a few more strings for translations (#3869)
@aasis21: Documentation for build notifications using webhooks. (#3671)
@mashrikt: [#2967] Scheduled tasks for cleaning up messages (#3604)
@marcelstoer: Doc builder template should check for mkdocs_page_input_path before using it (#3536)
Version 2.3.6
- Date:
April 05, 2018
@agjohnson: Drop readthedocs- prefix to submodule (#3916)
@agjohnson: This fixes two bugs apparent in nesting of translations in subprojects (#3909)
@humitos: Use a proper default for docker attribute on UpdateDocsTask (#3907)
@davidfischer: Handle errors from publish_parts (#3905)
@agjohnson: Drop pdbpp from testing requirements (#3904)
@humitos: Save Docker image data in JSON file only for DockerBuildEnvironment (#3897)
@davidfischer: Single analytics file for all builders (#3896)
Version 2.3.5
- Date:
April 05, 2018
@agjohnson: Drop pdbpp from testing requirements (#3904)
@agjohnson: Resolve subproject correctly in the case of single version (#3901)
@davidfischer: Fixed footer ads again (#3895)
@davidfischer: Fix an Alabaster ad positioning issue (#3889)
@humitos: Save Docker image hash in RTD environment.json file (#3880)
@agjohnson: Add ref links for easier intersphinx on yaml config page (#3877)
@rajujha373: Typo correction in docs/features.rst (#3872)
@gaborbernat: add description for tox tasks (#3868)
@davidfischer: Another CORS hotfix for the sustainability API (#3862)
@agjohnson: Fix up some of the logic around repo and submodule URLs (#3860)
@davidfischer: Fix linting errors in tests (#3855)
@agjohnson: Use gitpython to find a commit reference (#3843)
@davidfischer: Remove pinned CSS Select version (#3813)
@davidfischer: Use JSONP for sustainability API (#3789)
@rajujha373: #3718: Added date to changelog (#3788)
@xrmx: tests: mock test_conf_file_not_found filesystem access (#3740)
Version 2.3.4
Release for static assets
Version 2.3.3
@davidfischer: Fix linting errors in tests (#3855)
@humitos: Update instance and model when record_as_success (#3831)
@ericholscher: Reorder GSOC projects, and note priority order (#3823)
@agjohnson: Add temporary method for skipping submodule checkout (#3821)
@davidfischer: Remove pinned CSS Select version (#3813)
@humitos: Use readthedocs-common to share linting files across different repos (#3808)
@davidfischer: Use JSONP for sustainability API (#3789)
@humitos: Define useful celery beat task for development (#3762)
@humitos: Auto-generate conf.py compatible with Py2 and Py3 (#3745)
@humitos: Documentation for RTD context sent to the Sphinx theme (#3490)
Version 2.3.2
This version adds a hotfix branch that introduces model validation of the repository URL, ensuring strange URL patterns can’t be used.
Version 2.3.1
@humitos: Update instance and model when record_as_success (#3831)
@agjohnson: Bump docker -> 3.1.3 (#3828)
@himanshutejwani12: Update index.rst (#3824)
@ericholscher: Reorder GSOC projects, and note priority order (#3823)
@agjohnson: Autolint cleanup for #3821 (#3822)
@agjohnson: Add temporary method for skipping submodule checkout (#3821)
@varunotelli: Update install.rst dropped the Python 2.7 only part (#3814)
@xrmx: Update machine field when activating a version from project_version_detail (#3797)
@humitos: Allow members of “Admin” Team to wipe version envs (#3791)
@ericholscher: Add sustainability api to CORS (#3782)
@durwasa-chakraborty: Fixed a grammatical error (#3780)
@humitos: Trying to solve the end line character for a font file (#3776)
@bansalnitish: Added eslint rules (#3768)
@davidfischer: Use sustainability api for advertising (#3747)
@davidfischer: Add a sustainability API (#3672)
@humitos: Upgrade django-pagination to a “maintained” fork (#3666)
@davidfischer: Anonymize IP addresses for Google Analytics (#3626)
@humitos: Upgrade docker-py to its latest version (docker==3.1.1) (#3243)
Version 2.3.0
Warning
Version 2.3.0 includes a security fix for project translations. See Release 2.3.0 for more information
@berkerpeksag: Fix indentation in docs/faq.rst (#3758)
@rajujha373: #3741: replaced Go Crazy text with Search (#3752)
@humitos: Log in the proper place and add the image name used (#3750)
@shubham76: Changed ‘Submit’ text on buttons with something more meaningful (#3749)
@agjohnson: Fix tests for Git submodule (#3737)
@bansalnitish: Add eslint rules and fix errors (#3726)
@davidfischer: Prevent bots indexing promos (#3719)
@agjohnson: Add argument to skip errorlist through knockout on common form (#3704)
@ajatprabha: Fixed #3701: added closing tag for div element (#3702)
@bansalnitish: Fixes internal reference (#3695)
@humitos: Always record the git branch command as success (#3693)
@ericholscher: Show the project slug in the project admin (to make it more explicit what project is what) (#3681)
@agjohnson: Hotfix for adding logging call back into project sync task (#3657)
@agjohnson: Fix issue with missing setting in oauth SyncRepo task (#3656)
@ericholscher: Remove error logging that isn’t an error. (#3650)
@aasis21: formatting buttons in edit project text editor (#3633)
@humitos: Filter by my own repositories at Import Remote Project (#3548)
@funkyHat: check for matching alias before subproject slug (#2787)
Version 2.2.1
Version 2.2.1 is a bug fix release for several issues found in production during the 2.2.0 release.
@agjohnson: Hotfix for adding logging call back into project sync task (#3657)
@agjohnson: Fix issue with missing setting in oauth SyncRepo task (#3656)
@humitos: Send proper context to celery email notification task (#3653)
@ericholscher: Remove error logging that isn’t an error. (#3650)
@davidfischer: Update RTD security docs (#3641)
@humitos: Ability to override the creation of the Celery App (#3623)
Version 2.2.0
@humitos: Send proper context to celery email notification task (#3653)
@davidfischer: Fix a 500 when searching for files with API v1 (#3645)
@davidfischer: Update RTD security docs (#3641)
@humitos: Fix SVN initialization for command logging (#3638)
@humitos: Ability to override the creation of the Celery App (#3623)
@mohitkyadav: Add venv to .gitignore (#3620)
@Angeles4four: Grammar correction (#3596)
@davidfischer: Fix an unclosed tag (#3592)
@davidfischer: Force a specific ad to be displayed (#3584)
@stsewd: Docs about preference for tags over branches (#3582)
@davidfischer: Rework homepage (#3579)
@stsewd: Don’t allow to create a subproject of a project itself (#3571)
@davidfischer: Fix for build screen in firefox (#3569)
@davidfischer: Analytics fixes (#3558)
@davidfischer: Upgrade requests version (#3557)
@ericholscher: Add a number of new ideas for GSOC (#3552)
@davidfischer: Send custom dimensions for mkdocs (#3550)
@davidfischer: Promo contrast improvements (#3549)
@humitos: Allow git tags with / in the name and properly slugify (#3545)
@humitos: Allow to import public repositories on corporate site (#3537)
@davidfischer: Switch to universal analytics (#3495)
@agjohnson: Add docs on removing edit button (#3479)
@davidfischer: Convert default dev cache to local memory (#3477)
@agjohnson: Fix lint error (#3402)
@techtonik: Fix Edit links if version is referenced by annotated tag (#3302)
@jaraco: Fixed build results page on firefox (part two) (#2630)
Version 2.1.6
@davidfischer: Promo contrast improvements (#3549)
@humitos: Refactor run command outside a Build and Environment (#3542)
@AnatoliyURL: Project in the local read the docs don’t see tags. (#3534)
@johanneskoester: Build failed without details (#3531)
@danielmitterdorfer: “Edit on Github” points to non-existing commit (#3530)
@lk-geimfari: No such file or directory: ‘docs/requirements.txt’ (#3529)
@davidfischer: Switch to universal analytics (#3495)
@davidfischer: Convert default dev cache to local memory (#3477)
@nlgranger: Github service: cannot unlink after deleting account (#3374)
@andrewgodwin: “stable” appearing to track future release branches (#3268)
@chummels: RTD building old “stable” docs instead of “latest” when auto-triggered from recent push (#2351)
@gigster99: extension problem (#1059)
Version 2.1.5
@ericholscher: Add GSOC 2018 page (#3518)
@RichardLitt: Docs: Rename “Good First Bug” to “Good First Issue” (#3505)
@ericholscher: Check to make sure changes exist in Bitbucket pushes (#3480)
@andrewgodwin: “stable” appearing to track future release branches (#3268)
@Yaseenh: building project does not generate new pdf with changes in it (#2758)
@chummels: RTD building old “stable” docs instead of “latest” when auto-triggered from recent push (#2351)
@KeithWoods: GitHub edit link is aggressively stripped (#1788)
Version 2.1.4
@davidfischer: Add programming language to API/READTHEDOCS_DATA (#3499)
@ericholscher: Remove our mkdocs search override (#3496)
@davidfischer: Small formatting change to the Alabaster footer (#3491)
@ericholscher: Add David to dev team listing (#3485)
@ericholscher: Check to make sure changes exist in Bitbucket pushes (#3480)
@ericholscher: Use semvar for readthedocs-build to make bumping easier (#3475)
@davidfischer: Add programming languages (#3471)
@humitos: Remove TEMPLATE_LOADERS since it’s the default (#3469)
@ericholscher: Fix git (#3441)
@ericholscher: Properly slugify the alias on Project Relationships. (#3440)
@stsewd: Don’t show “build ideas” to unprivileged users (#3439)
@humitos: Do not use double quotes on git command with --format option (#3437)
@ericholscher: Hack in a fix for missing version slug deploy that went out a while back (#3433)
@humitos: Check versions used to create the venv and auto-wipe (#3432)
@ericholscher: Upgrade psycopg2 (#3429)
@ericholscher: Add celery theme to supported ad options (#3425)
@humitos: Link to version detail page from build detail page (#3418)
@humitos: Show/Hide “See paid advertising” checkbox depending on USE_PROMOS (#3412)
@benjaoming: Strip well-known version component origin/ from remote version (#3377)
@ericholscher: Add docker image from the YAML config integration (#3339)
@humitos: Show proper error to user when conf.py is not found (#3326)
@techtonik: Fix Edit links if version is referenced by annotated tag (#3302)
@Riyuzakii: changed <strong> from html to css (#2699)
Version 2.1.3
Date: December 21, 2017
@ericholscher: Upgrade psycopg2 (#3429)
@ericholscher: Add celery theme to supported ad options (#3425)
@ericholscher: Only build travis push builds on master. (#3421)
@ericholscher: Add concept of dashboard analytics code (#3420)
@humitos: Use default avatar for User/Orgs in OAuth services (#3419)
@humitos: Link to version detail page from build detail page (#3418)
@bieagrathara: 019 497 8360 (#3416)
@bieagrathara: rew (#3415)
@humitos: Show/Hide “See paid advertising” checkbox depending on USE_PROMOS (#3412)
@humitos: Pin pylint to 1.7.5 and fix docstring styling (#3408)
@agjohnson: Update style and copy on abandonment docs (#3406)
@agjohnson: Update changelog more consistently (#3405)
@agjohnson: Update prerelease invoke command to call with explicit path (#3404)
@ericholscher: Fix changelog command (#3403)
@agjohnson: Fix lint error (#3402)
@julienmalard: Recent builds are missing translated languages links (#3401)
@humitos: Show connect buttons for installed apps only (#3394)
@agjohnson: Fix display of build advice (#3390)
@agjohnson: Don’t display the build suggestions div if there are no suggestions (#3389)
@ericholscher: Pass more data into the redirects. (#3388)
@ericholscher: Fix issue where you couldn’t edit your canonical domain. (#3387)
@benjaoming: Strip well-known version component origin/ from remote version (#3377)
@JavaDevVictoria: Updated python.setup_py_install to be true (#3357)
@humitos: Use default avatars for GitLab/GitHub/Bitbucket integrations (users/organizations) (#3353)
@jonrkarr: Error in YAML configuration docs: default value for python.setup_py_install should be true (#3334)
@humitos: Show proper error to user when conf.py is not found (#3326)
@MikeHart85: Badges aren’t updating due to being cached on GitHub. (#3323)
@techtonik: Fix Edit links if version is referenced by annotated tag (#3302)
@dialex: Build passed but I can’t see the documentation (maze screen) (#3246)
@makixx: Account is inactive (#3241)
@agjohnson: Cleanup misreported failed builds (#3230)
@agjohnson: Remove copyright application (#3199)
@shacharoo: Unable to register after deleting my account (#3189)
@gtalarico: 3 week old Build Stuck Cloning (#3126)
@agjohnson: Regressions with conf.py and error reporting (#2963)
@agjohnson: Can’t edit canonical domain (#2922)
@Riyuzakii: changed <strong> from html to css (#2699)
@tjanez: Support specifying ‘python setup.py build_sphinx’ as an alternative build command (#1857)
Version 2.1.2
@agjohnson: Update changelog more consistently (#3405)
@agjohnson: Update prerelease invoke command to call with explicit path (#3404)
@agjohnson: Fix lint error (#3402)
@humitos: Show connect buttons for installed apps only (#3394)
@agjohnson: Don’t display the build suggestions div if there are no suggestions (#3389)
@jonrkarr: Error in YAML configuration docs: default value for python.setup_py_install should be true (#3334)
@agjohnson: Cleanup misreported failed builds (#3230)
@agjohnson: Remove copyright application (#3199)
Version 2.1.1
Release information missing
Version 2.1.0
@ericholscher: Revert “Merge pull request #3336 from readthedocs/use-active-for-stable” (#3368)
@agjohnson: Revert “Do not split before first argument (#3333)” (#3366)
@ericholscher: Remove pitch from ethical ads page, point folks to actual pitch page. (#3365)
@agjohnson: Add changelog and changelog automation (#3364)
@ericholscher: Fix mkdocs search. (#3361)
@ericholscher: Email sending: Allow kwargs for other options (#3355)
@ericholscher: Try and get folks to put more tags. (#3350)
@ericholscher: Suggest wiping your environment to folks with bad build outcomes. (#3347)
@jimfulton: Draft policy for claiming existing project names. (#3314)
@agjohnson: More logic changes to error reporting, cleanup (#3310)
@safwanrahman: [Fix #3182] Better user deletion (#3214)
@ericholscher: Better User deletion (#3182)
@RichardLitt: Add Needed: replication label (#3138)
@josejrobles: Replaced usage of deprecated function get_fields_with_model with new … (#3052)
@ericholscher: Don’t delete the subprojects directory on sync of superproject (#3042)
@andrew: Pass query string when redirecting, fixes #2595 (#3001)
@destroyerofbuilds: Setup GitLab Web Hook on Project Import (#1443)
@takotuesday: Add GitLab Provider from django-allauth (#1441)
Version 2.0
@ericholscher: Email sending: Allow kwargs for other options (#3355)
@ericholscher: Try and get folks to put more tags. (#3350)
@ericholscher: Small changes to email sending to enable from email (#3349)
@dplanella: Duplicate TOC entries (#3345)
@ericholscher: Small tweaks to ethical ads page (#3344)
@agjohnson: Fix python usage around oauth pagination (#3342)
@ericholscher: Change stable version switching to respect active (#3336)
@ericholscher: Allow superusers to pass admin & member tests for projects (#3335)
@humitos: Take preference of tags over branches when selecting the stable version (#3331)
@andrewgodwin: “stable” appearing to track future release branches (#3268)
@jakirkham: Specifying conda version used (#2076)
@agjohnson: Document code style guidelines (#1475)
Previous releases
Starting with version 2.0, we will be incrementing the Read the Docs version based on semantic versioning principles, and will be automating the update of our changelog.
Below are some historical changes from when we tried to add information here in the past.
July 23, 2015
Django 1.8 Support Merged
Code notes
Updated Django from 1.6.11 to 1.8.3.
Removed South and ported the South migrations to Django’s migration framework.
Updated django-celery from 3.0.23 to 3.1.26 as django-celery 3.0.x does not support Django 1.8.
Updated Celery from 3.0.24 to 3.1.18 because we had to update django-celery. We need to test this extensively and might need to think about using the new Celery API directly and dropping django-celery. See release notes: https://docs.celeryproject.org/en/3.1/whatsnew-3.1.html
Updated tastypie from 0.11.1 to current master (commit 1e1aff3dd4dcd21669e9c68bd7681253b286b856) as 0.11.x is not compatible with Django 1.8. No surprises expected but we should ask for a proper release, see release notes: https://github.com/django-tastypie/django-tastypie/blob/master/docs/release_notes/v0.12.0.rst
Updated django-oauth from 0.16.1 to 0.21.0. No surprises expected, see release notes in the docs and finer grained in the repo.
Updated django-guardian from 1.2.0 to 1.3.0 to gain Django 1.8 support. No surprises expected, see release notes: https://github.com/lukaszb/django-guardian/blob/devel/CHANGES
Using django-formtools instead of the removed django.contrib.formtools now. Based on the Django release notes, these modules are the same except for the package name.
Updated pytest-django from 2.6.2 to 2.8.0. No tests required, but running the testsuite :smile:
Updated psycopg2 from 2.4 to 2.4.6 as 2.4.5 is required by Django 1.8. No trouble expected as Django is the layer between us and psycopg2. Also it’s only a minor version upgrade. Release notes: http://initd.org/psycopg/docs/news.html#what-s-new-in-psycopg-2-4-6
Added django.setup() to conf.py to load Django properly for doc builds.
Added migrations for all apps with models in the readthedocs/ directory.
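The conf.py change mentioned above amounts to initializing Django before Sphinx imports any application code. Below is a minimal sketch of what that looks like; the settings module name used here is an assumption, and the actual module path in the readthedocs/ codebase may differ:
# docs/conf.py (sketch)
import os
import django

# Point Django at a settings module before calling setup(); this module name is an assumption.
os.environ.setdefault("DJANGO_SETTINGS_MODULE", "readthedocs.settings.dev")

# Load the Django app registry so autodoc can import models and other Django code during the docs build.
django.setup()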
Deployment notes
After you have updated the code and installed the new dependencies, you need to run these commands on the server:
python manage.py migrate contenttypes
python manage.py migrate projects 0002 --fake
python manage.py migrate --fake-initial
Locally, in a test environment, pip did not update to the specified commit of tastypie. It might be required to use pip install -U -r requirements/deploy.txt during deployment.
Development update notes
The readthedocs developers need to execute these commands when switching to this branch (or when it gets merged into main):
Before updating please make sure that all migrations are applied:
python manage.py syncdb
python manage.py migrate
Update the codebase:
git pull
You need to update the requirements with:
pip install -r requirements.txt
Now you need to fake the initial migrations:
python manage.py migrate contenttypes
python manage.py migrate projects 0002 --fake
python manage.py migrate --fake-initial
About Read the Docs
Read the Docs is a C Corporation registered in Oregon. Our bootstrapped company is owned and fully controlled by the founders, and fully funded by our customers and advertisers. This allows us to focus 100% on our users.
We have two main sources of revenue:
Read the Docs for Business - where we provide a valuable paid service to companies.
Read the Docs Community - where we provide a free service to the open source community, funded via EthicalAds.
We believe that having both paying customers and ethical advertising is the best way to create a sustainable platform for our users. We have built something that we expect to last a long time, and we are able to make decisions based only on the best interest of our community and customers.
All of the source code for Read the Docs is open source. You are welcome to contribute the features you want or run your own instance. We should note that we generally only support our hosted versions as a matter of our philosophy.
We owe a great deal to the open source community that we are a part of, so we provide free ads via our community ads program. This allows us to give back to the communities and projects that we support and depend on.
We are proud about the way we manage our company and products, and are glad to have you on board with us in this great documentation journey.
If you want to dive into more specific information and our policies, we’ve collected the most important ones below.
- ⏩ Business hosting
Learn more about how our company provides paid solutions
- ⏩ Policies and legal documents
Policies and legal documents used by Read the Docs Community and Read the Docs for Business.
- ⏩ Advertising
Information about how advertising works on Read the Docs
- ⏩ The story of Read the Docs
A brief look back at how we were founded
- ⏩ Sponsors of Read the Docs
Read about who currently sponsors Read the Docs and who sponsored us in the past.
- ⏩ Read the Docs open source philosophy
Our philosophy is anchored in open source.
- ⏩ Read the Docs team
How we work and who we are.
- ⏩ Site support
Read this before asking for help: How to get support and where.
- ⏩ Glossary
A useful index of terms used in our docs
See also
- Our website
Our primary website has general-purpose information about Read the Docs like pricing and feature overviews.
Policies and legal documents
Here is some of the fine print used by Read the Docs Community and Read the Docs for Business:
Abandoned projects policy
This policy describes the process by which a Read the Docs project slug may be changed.
Tip
If you want to de-list a project’s fork from search results, please see Unofficial and unmaintained projects policy.
Rationale
Conflicts between the current use of a name and a different suggested use of the same name occasionally arise. This document aims to provide general guidelines for solving the most typical cases of such conflicts.
Specification
The main idea behind this policy is that Read the Docs serves the community. Every user is invited to upload content under the Terms of Use, understanding that it is at the sole risk of the user.
While Read the Docs is not a backup service, the core team of Read the Docs does their best to keep that content accessible indefinitely in its published form. However, in certain edge cases the greater community’s needs might outweigh the individual’s expectation of ownership of a project name.
The use cases covered by this policy are:
- Abandoned projects
Renaming a project so that the original project name can be used by a different project
- Active projects
Resolving disputes over a name
Implementation
Reachability
The user of Read the Docs is solely responsible for being reachable by the core team for matters concerning projects that the user owns. In every case where contacting the user is necessary, the core team will try to do so, using the following means of contact:
E-mail address on file in the user’s profile
E-mail addresses found in the given project’s documentation
E-mail address on the project’s home page
The core team will stop trying to reach the user after six weeks and the user will be considered unreachable.
Abandoned projects
A project is considered abandoned when ALL of the following are met:
Owner is unreachable (see Reachability)
The project has no proper documentation being served (no successful builds) or does not have any releases within the past twelve months
No activity from the owner on the project’s home page (or no home page found).
All other projects are considered active.
Renaming of an abandoned project
Projects are never renamed solely on the basis of abandonment.
An abandoned project can be renamed (by appending -abandoned and a uniquifying integer if needed) for purposes of reusing the name when ALL of the following are met:
The project has been determined abandoned by the rules described above
The candidate is able to demonstrate their own failed attempts to contact the existing owner
The candidate is able to demonstrate that the project suggested to reuse the name already exists and meets notability requirements
The candidate is able to demonstrate why a fork under a different name is not an acceptable workaround
The project has fewer than 100 monthly pageviews
The core team does not have any additional reservations.
Reporting an abandoned project
You can report an abandoned project according to this policy by contacting our Site support.
Please include the following information:
URL of abandoned documentation project: ...
URL of abandoned project's repository (if any): ...
URL of abandoned project's website (if any): ...
Are you suggesting that an alternative project should take over the
name (slug) of the abandoned project? (y/n)
URL of alternative documentation (if any): ...
URL of alternative website (if any): ...
URL of alternative repository (if any): ...
Describe attempts of reaching the owner(s) of the abandoned project:
...
Name conflict resolution for active projects
The core team of Read the Docs are not arbiters in disputes around active projects. The core team recommends users to get in touch with each other and solve the issue by respectful communication.
Prior art
The Python Package Index (PyPI) policy for claiming abandoned packages (PEP-0541) heavily influenced this policy.
Unofficial and unmaintained projects policy
This policy describes a process where we take actions against unmaintained and unofficial forks of project documentation.
Tip
If you want to free up a project’s slug and gain access over it, please see Abandoned projects policy.
Rationale
Documentation projects may be kept online indefinitely, even though a newer version of the same project exists elsewhere. There are many reasons this can happen, including forks, old official docs that are unmaintained, and many other situations.
The problem with old, outdated docs is that users will find them in search results and get confused about their validity. Projects will then get support requests from people who are using an old and incorrect documentation version.
We have this policy to allow a reporter to request the delisting of forks that are old and outdated.
High level overview
The process at a high level looks like:
A reporter contacts us about a project they think is outdated and unofficial
A Read the Docs team member evaluates it to make sure it’s outdated and unofficial, according to this policy
We delist this project from search results and send an email to owners of the Read the Docs project
If a project owner objects, we evaluate their evidence and make a final decision
Definitions
Unofficial projects
A project is considered unofficial when it is not linked to or mentioned in any of these places:
Websites and domains associated with the project
The project’s primary repository – README files, repository description, or source code
Unmaintained projects
A project is considered unmaintained when any of the following are met:
The configured version control repository is unreadable. This can happen if the repository is deleted, credentials are broken or the Git host is permanently unresponsive.
The project is only serving content from releases and commits 6 months or older.
All builds have failed for more than 6 months.
Implementation
Requesting a project be delisted
You can request that we delist an outdated, unmaintained documentation project by contacting our Site support.
Please include the following information:
URL of unofficial and unmaintained documentation project: ...
URL of official documentation (if any): ...
URL of official project website (if any): ...
URL of official project repository (if any): ...
Describe attempts of reaching the owner(s) of the documentation project:
...
Delisting
Projects that are determined to be unmaintained and unofficial will have a robots.txt file added that removes them from all search results:
# robots.txt
User-agent: *
# This project is delisted according to the Unofficial and Unmaintained Projects Policy
# https://docs.readthedocs.io/en/stable/unofficial-projects.html
Disallow: /
Projects will be delisted if they meet all of the following criteria:
The person who submits the report of the unmaintained and unofficial project also demonstrates failed attempts to contact the existing owners.
The project has been determined unmaintained and unofficial by the rules described above.
The core team does not have any additional reservations.
The Read the Docs team will do the following actions when a project is delisted:
Notify the Read the Docs project owners via email about the delisting.
Add the robots.txt file to be served on the project domain.
If any of the project owners respond, their response will be taken into account, and the delisting might be reversed.
Thinking behind the policy
The main idea behind this policy is that Read the Docs serves the community. Every user is invited to upload content under Read the Docs Terms of Service, understanding that it is at the sole risk of the user.
While Read the Docs is not a backup service, the core team of Read the Docs does their best to keep content accessible indefinitely in its published form. However, in certain edge cases, the greater community’s needs might outweigh the individual’s expectation of continued publishing.
Prior art
This policy is inspired by our Abandoned projects policy. The Python Package Index (PyPI) policy for claiming abandoned packages (PEP-0541) heavily influenced this policy.
Privacy Policy
Effective date: February 21, 2023
Welcome to Read the Docs. At Read the Docs, we believe in protecting the privacy of our users, authors, and readers.
The short version
We collect your information only with your consent; we only collect the minimum amount of personal information that is necessary to fulfill the purpose of your interaction with us; we don’t sell it to third parties; and we only use it as this Privacy Policy describes.
Of course, the short version doesn’t tell you everything, so please read on for more details!
Our services
Read the Docs is made up of:
- readthedocs.org (“Read the Docs Community”)
This is a website aimed at documentation authors and project maintainers writing and distributing technical documentation. This Privacy Policy applies to this site in full without reservation.
- readthedocs.com (“Read the Docs for Business”)
This website is a commercial hosted offering for hosting private documentation for corporate clients. This Privacy Policy applies to this site in full without reservation.
- readthedocs.io, readthedocs-hosted.com, and other domains (“Documentation Sites”)
These websites are where Read the Docs hosts documentation (“User-Generated Content”) on behalf of documentation authors. A best effort is made to apply this Privacy Policy to these sites but the documentation may contain content and files created by documentation authors.
All use of Read the Docs is subject to this Privacy Policy, together with our Terms of service.
What information Read the Docs collects and why
Information from website browsers
If you’re just browsing the website, we collect the same basic information that most websites collect. We use common internet technologies, such as cookies and web server logs. We collect this basic information from everybody, whether they have an account or not.
The information we collect about all visitors to our website includes:
the visitor’s browser type
language preference
referring site
the date and time of each visitor request
We also collect potentially personally-identifying information like Internet Protocol (IP) addresses.
Why do we collect this?
We collect this information to better understand how our website visitors use Read the Docs, and to monitor and protect the security of the website.
Information from users with accounts
If you create an account, we require some basic information at the time of account creation. You will create your own user name and password, and we will ask you for a valid email account. You also have the option to give us more information if you want to, and this may include “User Personal Information.”
“User Personal Information” is any information about one of our users which could, alone or together with other information, personally identify him or her. Information such as a user name and password, an email address, a real name, and a photograph are examples of “User Personal Information.”
User Personal Information does not include aggregated, non-personally identifying information. We may use aggregated, non-personally identifying information to operate, improve, and optimize our website and service.
Why do we collect this information?
We need your User Personal Information to create your account, and to provide the services you request.
We use your User Personal Information, specifically your user name, to identify you on Read the Docs.
We use it to fill out your profile and share that profile with other users.
We will use your email address to communicate with you but it is not shared publicly.
We limit our use of your User Personal Information to the purposes listed in this Privacy Statement. If we need to use your User Personal Information for other purposes, we will ask your permission first. You can always see what information we have in your user account.
What information Read the Docs does not collect
We do not intentionally collect sensitive personal information, such as social security numbers, genetic data, health information, or religious information.
Documentation Sites hosted on Read the Docs are public, anyone (including us) may view their contents. If you have included private or sensitive information in your Documentation Site, such as email addresses, that information may be indexed by search engines or used by third parties.
Read the Docs for Business may host private projects, which we treat as confidential, and we only access them for support reasons, with your consent, or if required to for security reasons.
If you’re a child under the age of 13, you may not have an account on Read the Docs. Read the Docs does not knowingly collect information from or direct any of our content specifically to children under 13. If we learn or have reason to suspect that you are a user who is under the age of 13, we will unfortunately have to close your account. We don’t want to discourage you from writing software documentation, but those are the rules.
How Read the Docs secures your information
Read the Docs takes all measures reasonably necessary to protect User Personal Information from unauthorized access, alteration, or destruction; maintain data accuracy; and help ensure the appropriate use of User Personal Information. We follow generally accepted industry standards to protect the personal information submitted to us, both during transmission and once we receive it.
No method of transmission, or method of electronic storage, is 100% secure. Therefore, we cannot guarantee its absolute security.
Read the Docs’ global privacy practices
Information that we collect will be stored and processed in the United States in accordance with this Privacy Policy. However, we understand that we have users from different countries and regions with different privacy expectations, and we try to meet those needs.
We provide the same standard of privacy protection to all our users around the world, regardless of their country of origin or location. Additionally, we require that if our vendors or affiliates have access to User Personal Information, they must comply with our privacy policies and with applicable data privacy laws.
In particular:
Read the Docs provides clear methods of unambiguous, informed consent at the time of data collection, when we do collect your personal data.
We collect only the minimum amount of personal data necessary, unless you choose to provide more. We encourage you to only give us the amount of data you are comfortable sharing.
We offer you simple methods of accessing, correcting, or deleting the data we have collected.
We also provide our users a method of recourse and enforcement.
Resolving complaints
If you have concerns about the way Read the Docs is handling your User Personal Information, please let us know immediately by emailing us at privacy@readthedocs.org.
How we respond to compelled disclosure
Read the Docs may disclose personally-identifying information or other information we collect about you to law enforcement in response to a valid subpoena, court order, warrant, or similar government order, or when we believe in good faith that disclosure is reasonably necessary to protect our property or rights, or those of third parties or the public at large.
In complying with court orders and similar legal processes, Read the Docs strives for transparency. When permitted, we will make a reasonable effort to notify users of any disclosure of their information, unless we are prohibited by law or court order from doing so, or in rare, exigent circumstances.
How you can access and control the information we collect
If you’re already a Read the Docs user, you may access, update, alter, or delete your basic user profile information by editing your user account.
Data retention and deletion
Read the Docs will retain User Personal Information for as long as your account is active or as needed to provide you services.
We may retain certain User Personal Information indefinitely, unless you delete it or request its deletion. For example, we don’t automatically delete inactive user accounts, so unless you choose to delete your account, we will retain your account information indefinitely.
If you would like to delete your User Personal Information, you may do so in your user account. We will retain and use your information as necessary to comply with our legal obligations, resolve disputes, and enforce our agreements, but barring legal requirements, we will delete your full profile.
Our web server logs for Read the Docs Community, Read the Docs for Business, and Documentation Sites are deleted after 10 days barring legal obligations.
Changes to our Privacy Policy
We reserve the right to revise this Privacy Policy at any time. If we change this Privacy Policy in the future, we will post the revised Privacy Policy and update the “Effective Date,” above, to reflect the date of the changes.
Contacting Read the Docs
Questions regarding Read the Docs’ Privacy Policy or information practices should be directed to privacy@readthedocs.org.
Security policy
Read the Docs adheres to the following security policies and procedures with regards to development, operations, and managing infrastructure. You can also find information on how we handle specific user data in our Privacy Policy.
Our engineering team monitors several sources for security threats and responds accordingly to security threats and notifications.
We monitor 3rd party software included in our application and in our infrastructure for security notifications. Any relevant security patches are applied and released immediately.
We monitor our infrastructure providers for signs of attacks or abuse and will respond accordingly to threats.
Infrastructure
Read the Docs infrastructure is hosted on Amazon Web Services (AWS). We also use Cloudflare services to mitigate attacks and abuse.
Data and data center
All user data is stored in the USA in multi-tenant datastores in Amazon Web Services data centers. Physical access to these data centers is secured with a variety of controls to prevent unauthorized access.
Application
- Encryption in transit
All documentation, application dashboard, and API access is transmitted using SSL encryption. We do not support unencrypted requests, even for public project documentation hosting.
- Temporary repository storage
We do not store or cache user repository data; temporary storage is used for every project build on Read the Docs.
- Authentication
Read the Docs supports SSO with GitHub, GitLab, Bitbucket, and Google Workspace (formerly G Suite).
- Payment security
We do not store or process any payment details. All payment information is stored with our payment provider, Stripe – a PCI-certified level 1 payment provider.
Engineering and operational practices
- Immutable infrastructure
We don’t make live changes to production code or infrastructure. All changes to our application and our infrastructure go through the same code review process before being applied and released.
- Continuous integration
We are constantly testing changes to our application code and operational changes to our infrastructure.
- Incident response
Our engineering team is on a rotating on-call schedule to respond to security or availability incidents.
Account security
All traffic is encrypted in transit so your login is protected.
Read the Docs stores only one-way hashes of all passwords. Nobody at Read the Docs has access to your passwords.
Account login is protected from brute force attacks with rate limiting.
While most projects and docs on Read the Docs are public, we treat your private repositories and private documentation as confidential and Read the Docs employees may only view them with your explicit permission in response to your support requests, or when required for security purposes.
You can read more about account privacy in our Privacy Policy.
Security reports
Security is very important to us at Read the Docs. We follow generally accepted industry standards to protect the personal information submitted to us, both during transmission and once we receive it. In the spirit of transparency, we are committed to responsible reporting and disclosure of security issues.
See also
- Security policy
Read our security policy, on which we base our security handling and reporting.
Supported versions
Only the latest version of Read the Docs will receive security updates. We don’t support security updates for custom installations of Read the Docs.
Reporting a security issue
If you believe you’ve discovered a security issue at Read the Docs, please contact us at security@readthedocs.org (optionally using our PGP key). We request that you please not publicly disclose the issue until it has been addressed by us.
You can expect:
We will respond to acknowledge your email, typically within one business day.
We will follow up if and when we have confirmed the issue with a timetable for the fix.
We will notify you when the issue is fixed.
We will create a GitHub advisory and publish it when the issue has been fixed and deployed in our platforms.
PGP key
You may use this PGP key to securely communicate with us and to verify signed messages you receive from us.
Bug bounties
While we sincerely appreciate and encourage reports of suspected security problems, please note that Read the Docs is an open source project and does not run any bug bounty programs.
Security issue archive
You can see all past reports at https://github.com/readthedocs/readthedocs.org/security/advisories.
Version 3.2.0
Version 3.2.0 resolved an issue where a specially crafted request could result in a DNS query to an arbitrary domain.
This issue was found by Cyber Smart Defence who reported it as part of a security audit to a firm running a local installation of Read the Docs.
Release 2.3.0
Version 2.3.0 resolves a security issue with translations on our community hosting site that allowed users to modify the hosted path of a target project by adding it as a translation project of their own project. A check was added to ensure project ownership before adding the project as a translation.
In order to add a project as a translation now, users must first be granted ownership in the translation project.
Read the Docs Terms of Service
Effective date: September 30, 2019
Thank you for using Read the Docs! We’re happy you’re here. Please read this Terms of Service agreement carefully before accessing or using Read the Docs. Because it is such an important contract between us and our users, we have tried to make it as clear as possible. For your convenience, we have presented these terms in a short non-binding summary followed by the full legal terms.
Definitions
Short version: We use these basic terms throughout the agreement, and they have specific meanings. You should know what we mean when we use each of the terms. There’s not going to be a test on it, but it’s still useful information.
The “Agreement” refers, collectively, to all the terms, conditions, notices contained or referenced in this document (the “Terms of Service” or the “Terms”) and all other operating rules, policies (including our Privacy Policy) and procedures that we may publish from time to time on our sites.
Our “Service” or “Services” refers to the applications, software, products, and services provided by Read the Docs (see Our services).
The “Website” or “Websites” refers to Read the Docs’ websites located at readthedocs.org, readthedocs.com, Documentation Sites, and all content, services, and products provided by Read the Docs at or through those Websites.
“The User,” “You,” and “Your” refer to the individual person, company, or organization that has visited or is using our Websites or Services; that accesses or uses any part of the Account; or that directs the use of the Account in the performance of its functions. A User must be at least 13 years of age.
“Read the Docs,” “We,” and “Us” refer to Read the Docs, Inc., as well as our affiliates, directors, subsidiaries, contractors, licensors, officers, agents, and employees.
“Content” refers to content featured or displayed through the Websites, including without limitation text, data, articles, images, photographs, graphics, software, applications, designs, features, and other materials that are available on our Websites or otherwise available through our Services. “Content” also includes Services. “User-Generated Content” is Content, written or otherwise, created or uploaded by our Users. “Your Content” is Content that you create or own.
An “Account” represents your legal relationship with Read the Docs. A “User Account” represents an individual User’s authorization to log in to and use the Service and serves as a User’s identity on Read the Docs. “Organizations” are shared workspaces that may be associated with a single entity or with one or more Users where multiple Users can collaborate across many projects at once. A User Account can be a member of any number of Organizations.
“User Personal Information” is any information about one of our users which could, alone or together with other information, personally identify him or her. Information such as a user name and password, an email address, a real name, and a photograph are examples of User Personal Information. Our Privacy Policy goes into more details on User Personal Information, what data Read the Docs collects, and why we collect it.
Our services
Read the Docs is made up of the following Websites:
- readthedocs.org (“Read the Docs Community”)
This Website is used by documentation authors and project maintainers for writing and distributing technical documentation.
- readthedocs.com (“Read the Docs for Business”)
This Website is a commercial hosted offering for hosting private documentation for corporate clients.
- readthedocs.io, readthedocs-hosted.com, and other domains (“Documentation Sites”)
These Websites are where Read the Docs hosts User-Generated Content on behalf of documentation authors.
Account terms
Short version: User Accounts and Organizations have different administrative controls; a human must create your Account; you must be 13 or over; and you must provide a valid email address. You alone are responsible for your Account and anything that happens while you are signed in to or using your Account. You are responsible for keeping your Account secure.
Account controls
- Users
Subject to these Terms, you retain ultimate administrative control over your User Account and the Content within it.
- Organizations
The “owner” of an Organization that was created under these Terms has ultimate administrative control over that Organization and the Content within it. Within our Services, an owner can manage User access to the Organization’s data and projects. An Organization may have multiple owners, but there must be at least one User Account designated as an owner of an Organization. If you are the owner of an Organization under these Terms, we consider you responsible for the actions that are performed on or through that Organization.
Required information
You must provide a valid email address in order to complete the signup process. Any other information requested, such as your real name, is optional, unless you are accepting these terms on behalf of a legal entity (in which case we need more information about the legal entity) or if you opt for a paid Account, in which case additional information will be necessary for billing purposes.
Account requirements
We have a few simple rules for User Accounts on Read the Docs’ Services.
You must be a human to create an Account. Accounts registered by “bots” or other automated methods are not permitted. We do permit machine accounts:
A machine account is an Account set up by an individual human who accepts the Terms on behalf of the Account, provides a valid email address, and is responsible for its actions. A machine account is used exclusively for performing automated tasks. Multiple users may direct the actions of a machine account, but the owner of the Account is ultimately responsible for the machine’s actions.
You must be age 13 or older. While we are thrilled to see brilliant young developers and authors get excited by learning to program, we must comply with United States law. Read the Docs does not target our Services to children under 13, and we do not permit any Users under 13 on our Service. If we learn of any User under the age of 13, we will have to close your account. If you are a resident of a country outside the United States, your country’s minimum age may be older; in such a case, you are responsible for complying with your country’s laws.
You may not use Read the Docs in violation of export control or sanctions laws of the United States or any other applicable jurisdiction. You may not use Read the Docs if you are or are working on behalf of a Specially Designated National (SDN) or a person subject to similar blocking or denied party prohibitions administered by a U.S. government agency. Read the Docs may allow persons in certain sanctioned countries or territories to access certain Read the Docs services pursuant to U.S. government authorizations.
User Account security
You are responsible for keeping your Account secure while you use our Service.
You are responsible for all content posted and activity that occurs under your Account.
You are responsible for maintaining the security of your Account and password. Read the Docs cannot and will not be liable for any loss or damage from your failure to comply with this security obligation.
You will promptly notify Read the Docs if you become aware of any unauthorized use of, or access to, our Services through your Account, including any unauthorized use of your password or Account.
Additional terms
In some situations, third parties’ terms may apply to your use of Read the Docs. For example, you may be a member of an organization on Read the Docs with its own terms or license agreements; or you may download an application that integrates with Read the Docs. Please be aware that while these Terms are our full agreement with you, other parties’ terms govern their relationships with you.
Acceptable use
Short version: Read the Docs hosts a wide variety of collaborative projects from all over the world, and that collaboration only works when our users are able to work together in good faith. While using the service, you must follow the terms of this section, which include some restrictions on content you can post, conduct on the service, and other limitations. In short, be excellent to each other.
Your use of our Websites and Services must not violate any applicable laws, including copyright or trademark laws, export control or sanctions laws, or other laws in your jurisdiction. You are responsible for making sure that your use of the Service is in compliance with laws and any applicable regulations.
User-Generated Content
Short version: You own content you create, but you allow us certain rights to it, so that we can display and share the content and documentation you post. You still have control over your content, and responsibility for it, and the rights you grant us are limited to those we need to provide the service. We have the right to remove content or close Accounts if we need to.
Responsibility for User-Generated Content
You may create or upload User-Generated Content while using the Service. You are solely responsible for the content of, and for any harm resulting from, any User-Generated Content that you post, upload, link to or otherwise make available via the Service, regardless of the form of that Content. We are not responsible for any public display or misuse of your User-Generated Content.
Read the Docs may remove Content
We do not pre-screen User-Generated Content, but we have the right (though not the obligation) to refuse or remove any User-Generated Content that, in our sole discretion, violates any Read the Docs terms or policies.
Ownership of Content, right to post, and license grants
You retain ownership of and responsibility for Your Content. If you’re posting anything you did not create yourself or do not own the rights to, you agree that you are responsible for any Content you post; that you will only submit Content that you have the right to post; and that you will fully comply with any third party licenses relating to Content you post.
Because you retain ownership of and responsibility for Your Content, we need you to grant us — and other Read the Docs Users — certain legal permissions, listed below (in License grant to us, License grant to other users and Moral rights). These license grants apply to Your Content. If you upload Content that already comes with a license granting Read the Docs the permissions we need to run our Service, no additional license is required. You understand that you will not receive any payment for any of the rights granted. The licenses you grant to us will end when you remove Your Content from our servers.
License grant to us
We need the legal right to do things like host Your Content, publish it, and share it. You grant us and our legal successors the right to store, parse, and display Your Content, and make incidental copies as necessary to render the Website and provide the Service. This includes the right to do things like copy it to our database and make backups; show it to you and other users; parse it into a search index or otherwise analyze it on our servers; share it with other users; and perform it, in case Your Content is something like music or video.
This license does not grant Read the Docs the right to sell Your Content or otherwise distribute or use it outside of our provision of the Service.
License grant to other users
Any User-Generated Content you post publicly may be viewed by others. By setting your projects to be viewed publicly, you agree to allow others to view your Content.
On Read the Docs Community, all Content is public.
Moral rights
You retain all moral rights to Your Content that you upload, publish, or submit to any part of our Services, including the rights of integrity and attribution. However, you waive these rights and agree not to assert them against us, to enable us to reasonably exercise the rights granted in License grant to us, but not otherwise.
To the extent this agreement is not enforceable by applicable law, you grant Read the Docs the rights we need to use Your Content without attribution and to make reasonable adaptations of Your Content as necessary to render our Websites and provide our Services.
Private projects
Short version: You may connect Read the Docs for Business to your private repositories or host documentation privately. We treat the content of these private projects as confidential, and we only access it for support reasons, with your consent, or if required to for security reasons.
Confidentiality of private projects
Read the Docs considers the contents of private projects to be confidential to you. Read the Docs will protect the contents of private projects from unauthorized use, access, or disclosure in the same manner that we would use to protect our own confidential information of a similar nature and in no event with less than a reasonable degree of care.
Access
Read the Docs employees may only access the content of your private projects in the following situations:
With your consent and knowledge, for support reasons. If Read the Docs accesses a private project for support reasons, we will only do so with the owner’s consent and knowledge.
When access is required for security reasons, including when access is required to maintain ongoing confidentiality, integrity, availability and resilience of Read the Docs’ systems and Services.
Exclusions
If we have reason to believe the contents of a private project are in violation of the law or of these Terms, we have the right to access, review, and remove them. Additionally, we may be compelled by law to disclose the contents of your private projects.
Copyright infringement and DMCA policy
If you believe that content on our website violates your copyright or other rights, please contact us in accordance with our Digital Millennium Copyright Act Policy. There may be legal consequences for sending a false or frivolous takedown notice. Before sending a takedown request, you must consider legal uses such as fair use and licensed uses.
We will terminate the Accounts of repeat infringers of this policy.
Intellectual property notice
Short version: We own the Service and all of our Content. In order for you to use our Content, we give you certain rights to it, but you may only use our Content in the way we have allowed.
Read the Docs’ rights to content
Read the Docs and our licensors, vendors, agents, and/or our content providers retain ownership of all intellectual property rights of any kind related to our Websites and Services. We reserve all rights that are not expressly granted to you under this Agreement or by law.
Read the Docs trademarks and logos
If you’d like to use Read the Docs’s trademarks, you must follow all of our trademark guidelines.
API terms
Short version: You agree to these Terms of Service, plus this Section, when using any of Read the Docs’ APIs (Application Programming Interface), including use of the API through a third party product that accesses Read the Docs.
No abuse or overuse of the API
Abuse or excessively frequent requests to Read the Docs via the API may result in the temporary or permanent suspension of your Account’s access to the API. Read the Docs, in our sole discretion, will determine abuse or excessive usage of the API. We will make a reasonable attempt to warn you via email prior to suspension.
You may not share API tokens to exceed Read the Docs’ rate limitations.
You may not use the API to download data or Content from Read the Docs for spamming purposes, including for the purposes of selling Read the Docs users’ personal information, such as to recruiters, headhunters, and job boards.
All use of the Read the Docs API is subject to these Terms of Service and our Privacy Policy.
Read the Docs may offer subscription-based access to our API for those Users who require high-throughput access or access that would result in resale of Read the Docs’ Service.
Additional terms for Documentation Sites
Short version: Documentation Sites on Read the Docs are subject to certain rules, in addition to the rest of the Terms.
Documentation Sites
Each Read the Docs Account comes with the ability to host Documentation Sites. This hosting service is intended to host static web pages for All Users. Documentation Sites are subject to some specific bandwidth and usage limits, and may not be appropriate for some high-bandwidth uses or other prohibited uses.
Third party applications
Short version: You need to follow certain rules if you create an application for other Users.
Creating applications
If you create a third-party application or other developer product that collects User Personal Information or User-Generated Content and integrates with the Service through Read the Docs’ API, OAuth mechanism, or otherwise (“Developer Product”), and make it available for other Users, then you must comply with the following requirements:
You must comply with this Agreement and our Privacy Policy.
Except as otherwise permitted, such as by law or by a license, you must limit your usage of the User Personal Information or User-Generated Content you collect to that purpose for which the User has authorized its collection.
You must take all reasonable security measures appropriate to the risks, such as against accidental or unlawful destruction, or accidental loss, alteration, unauthorized disclosure or access, presented by processing the User Personal Information or User-Generated Content.
You must not hold yourself out as collecting any User Personal Information or User-Generated Content on Read the Docs’ behalf, and provide sufficient notice of your privacy practices to the User, such as by posting a privacy policy.
You must provide Users with a method of deleting any User Personal Information or User-Generated Content you have collected through Read the Docs after it is no longer needed for the limited and specified purposes for which the User authorized its collection, except where retention is required by law or otherwise permitted, such as through a license.
Advertising on Documentation Sites
Short version: We do not generally prohibit use of Documentation Sites for advertising. However, we expect our users to follow certain limitations, so Read the Docs does not become a spam haven. No one wants that.
Our advertising
We host advertising on Documentation Sites on Read the Docs Community. This advertising is first-party advertising hosted by Read the Docs. We do not run any code from advertisers and all ad images are hosted on Read the Docs’ servers. For more details, see our document on Advertising details.
Acceptable advertising on Documentation Sites
We offer Documentation Sites primarily as a showcase for personal and organizational projects. Some project monetization efforts are permitted on Documentation Sites, such as donation buttons and crowdfunding links.
Spamming and inappropriate use of Read the Docs
Advertising Content, like all Content, must not violate the law or these Terms of Use, for example through excessive bulk activity such as spamming. We reserve the right to remove any projects that, in our sole discretion, violate any Read the Docs terms or policies.
Payment
Short version: You are responsible for any fees associated with your use of Read the Docs. We are responsible for communicating those fees to you clearly and accurately, and letting you know well in advance if those prices change.
Pricing
Our pricing and payment terms are available at https://readthedocs.com/pricing/. If you agree to a subscription price, that will remain your price for the duration of the payment term; however, prices are subject to change at the end of a payment term.
Upgrades, downgrades, and changes
We will immediately bill you when you upgrade from the free plan to any paying plan (either Read the Docs for Business or a Gold membership).
If you change from a monthly billing plan to a yearly billing plan, Read the Docs will bill you for a full year at the next monthly billing date.
If you upgrade to a higher level of service, we will bill you for the upgraded plan immediately.
You may change your level of service at any time by going into your billing settings. If you choose to downgrade your Account, you may lose access to Content, features, or capacity of your Account.
Billing schedule; no refunds
For monthly or yearly payment plans, the Service is billed in advance on a monthly or yearly basis respectively and is non-refundable. There will be no refunds or credits for partial months of service, downgrade refunds, or refunds for months unused with an open Account; however, the service will remain active for the length of the paid billing period.
Exceptions to these rules are at Read the Docs’ sole discretion.
Responsibility for payment
You are responsible for all fees, including taxes, associated with your use of the Service. By using the Service, you agree to pay Read the Docs any charge incurred in connection with your use of the Service. If you dispute the matter, contact us. You are responsible for providing us with a valid means of payment for paid Accounts. Free Accounts are not required to provide payment information.
Cancellation and termination
Short version: You may close your Account at any time. If you do, we’ll treat your information responsibly.
Account cancellation
It is your responsibility to properly cancel your Account with Read the Docs. You can cancel your Account at any time by going into your Settings in the global navigation bar at the top of the screen. We are not able to cancel Accounts in response to an email or phone request.
Upon cancellation
We will retain and use your information as necessary to comply with our legal obligations, resolve disputes, and enforce our agreements, but barring legal requirements, we will delete your full profile and the Content of your repositories within 90 days of cancellation or termination. This information cannot be recovered once your Account is cancelled.
Read the Docs may terminate
Read the Docs has the right to suspend or terminate your access to all or any part of the Website at any time, with or without cause, with or without notice, effective immediately. Read the Docs reserves the right to refuse service to anyone for any reason at any time.
Survival
All provisions of this Agreement which, by their nature, should survive termination will survive termination – including, without limitation: ownership provisions, warranty disclaimers, indemnity, and limitations of liability.
Communications with Read the Docs
Short version: We use email and other electronic means to stay in touch with our users.
Electronic communication required
For contractual purposes, you:
Consent to receive communications from us in an electronic form via the email address you have submitted or via the Service
Agree that all Terms of Service, agreements, notices, disclosures, and other communications that we provide to you electronically satisfy any legal requirement that those communications would satisfy if they were on paper. This section does not affect your non-waivable rights.
Legal notice to Read the Docs must be in writing
Communications made through email or Read the Docs’ support system will not constitute legal notice to Read the Docs or any of its officers, employees, agents or representatives in any situation where notice to Read the Docs is required by contract or any law or regulation. Legal notice to Read the Docs must be in writing.
No phone support
Read the Docs only offers support via email, in-Service communications, and electronic messages. We do not offer telephone support.
Disclaimer of warranties
Short version: We provide our service as is, and we make no promises or guarantees about this service. Please read this section carefully; you should understand what to expect.
Read the Docs provides the Website and the Service “as is” and “as available,” without warranty of any kind. Without limiting this, we expressly disclaim all warranties, whether express, implied or statutory, regarding the Website and the Service including without limitation any warranty of merchantability, fitness for a particular purpose, title, security, accuracy and non-infringement.
Read the Docs does not warrant that the Service will meet your requirements; that the Service will be uninterrupted, timely, secure, or error-free; that the information provided through the Service is accurate, reliable or correct; that any defects or errors will be corrected; that the Service will be available at any particular time or location; or that the Service is free of viruses or other harmful components. You assume full responsibility and risk of loss resulting from your downloading and/or use of files, information, content or other material obtained from the Service.
Limitation of liability
Short version: We will not be liable for damages or losses arising from your use or inability to use the Service or otherwise arising under this agreement. Please read this section carefully; it limits our obligations to you.
You understand and agree that we will not be liable to you or any third party for any loss of profits, use, goodwill, or data, or for any incidental, indirect, special, consequential or exemplary damages, however arising, that result from:
the use, disclosure, or display of your User-Generated Content;
your use or inability to use the Service;
any modification, price change, suspension or discontinuance of the Service;
the Service generally or the software or systems that make the Service available;
unauthorized access to or alterations of your transmissions or data;
statements or conduct of any third party on the Service;
any other user interactions that you input or receive through your use of the Service; or
any other matter relating to the Service.
Our liability is limited whether or not we have been informed of the possibility of such damages, and even if a remedy set forth in this Agreement is found to have failed of its essential purpose. We will have no liability for any failure or delay due to matters beyond our reasonable control.
Release and indemnification
Short version: You are responsible for your use of the service. If you harm someone else or get into a dispute with someone else, we will not be involved.
If you have a dispute with one or more Users, you agree to release Read the Docs from any and all claims, demands and damages (actual and consequential) of every kind and nature, known and unknown, arising out of or in any way connected with such disputes.
You agree to indemnify us, defend us, and hold us harmless from and against any and all claims, liabilities, and expenses, including attorneys’ fees, arising out of your use of the Website and the Service, including but not limited to your violation of this Agreement, provided that Read the Docs:
Promptly gives you written notice of the claim, demand, suit or proceeding
Gives you sole control of the defense and settlement of the claim, demand, suit or proceeding (provided that you may not settle any claim, demand, suit or proceeding unless the settlement unconditionally releases Read the Docs of all liability)
Provides to you all reasonable assistance, at your expense.
Changes to these terms
Short version: We want our users to be informed of important changes to our terms, but some changes aren’t that important — we don’t want to bother you every time we fix a typo. So while we may modify this agreement at any time, we will notify users of any changes that affect your rights and give you time to adjust to them.
We reserve the right, at our sole discretion, to amend these Terms of Service at any time and will update these Terms of Service in the event of any such amendments. We will notify our Users of material changes to this Agreement, such as price changes, at least 30 days prior to the change taking effect by posting a notice on our Website. For non-material modifications, your continued use of the Website constitutes agreement to our revisions of these Terms of Service.
We reserve the right at any time and from time to time to modify or discontinue, temporarily or permanently, the Website (or any part of it) with or without notice.
Miscellaneous
Governing law
Except to the extent applicable law provides otherwise, this Agreement between you and Read the Docs and any access to or use of our Websites or our Services are governed by the federal laws of the United States of America and the laws of the State of Oregon, without regard to conflict of law provisions.
Non-assignability
Read the Docs may assign or delegate these Terms of Service and/or our Privacy Policy, in whole or in part, to any person or entity at any time with or without your consent, including the license grant in License grant to us. You may not assign or delegate any rights or obligations under the Terms of Service or Privacy Policy without our prior written consent, and any unauthorized assignment and delegation by you is void.
Section headings and summaries
Throughout this Agreement, each section includes titles and brief summaries of the following terms and conditions. These section titles and brief summaries are not legally binding.
Severability, no waiver, and survival
If any part of this Agreement is held invalid or unenforceable, that portion of the Agreement will be construed to reflect the parties’ original intent. The remaining portions will remain in full force and effect. Any failure on the part of Read the Docs to enforce any provision of this Agreement will not be considered a waiver of our right to enforce such provision. Our rights under this Agreement will survive any termination of this Agreement.
Amendments; complete agreement
This Agreement may only be modified by a written amendment signed by an authorized representative of Read the Docs, or by the posting by Read the Docs of a revised version in accordance with Changes to these terms. These Terms of Service, together with our Privacy Policy, represent the complete and exclusive statement of the agreement between you and us. This Agreement supersedes any proposal or prior agreement oral or written, and any other communications between you and Read the Docs relating to the subject matter of these terms including any confidentiality or nondisclosure agreements.
Questions
Questions about the Terms of Service? Get in touch.
DMCA takedown policy
These are the guidelines that Read the Docs follows when handling DMCA takedown requests and takedown counter requests. If you are a copyright holder wishing to submit a takedown request, or an author that has been notified of a takedown request, please familiarize yourself with our process. You will be asked to confirm that you have reviewed this information if you submit a request or counter request.
We aim to keep this entire process as transparent as possible. Our process is modeled after GitHub’s DMCA takedown process, which we appreciate for its focus on transparency and fairness. All requests and counter requests will be posted to this page below, in the Request Archive. These requests will be redacted to remove all identifying information, except for Read the Docs user and project names.
Takedown process
Here are the steps Read the Docs will follow in the takedown request process:
- Copyright holder submits a request
This request, if valid, will be posted publicly on this page, down below. The author affected by the takedown request will be notified with a link to the takedown request.
For more information on submitting a takedown request, see: Submitting a Request
- Author is contacted
The author of the content in question will be asked to make changes to the content specified in the takedown request. The author will have 24 hours to make these changes. The copyright holder will be notified if and when this process begins.
- Author acknowledges changes have been made
The author must notify Read the Docs that changes have been made within 24 hours of receiving a takedown request. If the author does not respond to this request, the default action will be to disable the Read the Docs project and remove any hosted versions.
- Copyright holder review
If the author has made changes, the copyright holder will be notified of these changes. If the changes are sufficient, no further action is required, though copyright holders are welcome to submit a formal retraction. If the changes are not sufficient, the author’s changes can be rejected. If the takedown request requires alteration, a new request must be submitted. If Read the Docs does not receive a review response from the copyright holder within 2 weeks, the default action at this step is to assume the takedown request has been retracted.
- Content may be disabled
If the author does not respond to a request for changes, or if the copyright holder has rejected the author’s changes during the review process, the documentation project in question will be disabled.
- Author submits a counter request
If the author believes their content was disabled as a result of a mistake, a counter request may be submitted. Authors are advised to seek legal counsel before continuing. If the submitted counter request is sufficiently detailed, this counter will also be added to this page. The copyright holder will be notified, with a link to this counter request.
For more information on submitting a counter request, see: Submitting a Counter
- Copyright holder may file legal action
At this point, if the copyright holder wishes to keep the offending content disabled, the copyright holder must file for legal action ordering the author to refrain from infringing activities on Read the Docs. The copyright holder will have 2 weeks to supply Read the Docs with a copy of a valid legal complaint against the author. The default action here, if the copyright holder does not respond to this request, is to re-enable the author’s project.
Submitting a request
Your request must:
- Acknowledge this process
You must first acknowledge you are familiar with our DMCA takedown request process. If you do not acknowledge that you are familiar with our process, you will be instructed to review this information.
- Identify the infringing content
You should list URLs to each piece of infringing content. If you allege that the entire project is infringing on copyrights you hold, please specify the entire project as infringing.
- Identify infringement resolution
You will need to specify what a user must do in order to avoid having the rest of their content disabled. Be as specific as possible. Specify whether this means adding attribution, identify specific files or content that should be removed, or, if you allege the entire project is infringing, be specific as to why it is infringing.
- Include your contact information
Include your name, email, physical address, and phone number.
- Include your signature
This can be a physical or electronic signature.
Please complete this takedown request template
and send it to: support@readthedocs.com
Submitting a counter
Your counter request must:
- Acknowledge this process
You must first acknowledge you are familiar with our DMCA takedown request process. If you do not acknowledge that you are familiar with our process, you will be instructed to review this information.
- Identify the infringing content that was removed
Specify URLs in the original takedown request that you wish to challenge.
- Include your contact information
Include your name, email, physical address, and phone number.
- Include your signature
This can be a physical or electronic signature.
Requests can be submitted to: support@readthedocs.com
Request archive
For better transparency into copyright ownership and the DMCA takedown process, Read the Docs maintains this archive of previous DMCA takedown requests. This is modeled after GitHub’s DMCA archive.
The following DMCA takedown requests have been submitted:
2022-06-07
Note
The project maintainer was notified about this report and instructed to submit a counter if they believed this request was invalid. The user removed the project manually, and no further action was required.
- Are you the copyright owner or authorized to act on the copyright owner’s behalf?
Yes
- What work was allegedly infringed? If possible, please provide a URL:
- What files or project should be taken down? You should list URLs to each piece of infringing content. If you allege that the entire project is infringing on copyrights you hold, please specify the entire project as infringing:
- Is the work licensed under an open source license?
No
- What would be the best solution for the alleged infringement?
Complete Removal.
- Do you have the alleged infringer’s contact information? Yes. If so, please provide it:
[private]
- Type (or copy and paste) the following statement: “I have a good faith belief that use of the copyrighted materials described above on the infringing web pages is not authorized by the copyright owner, or its agent, or the law. I have taken fair use into consideration.”
I have a good faith belief that use of the copyrighted materials described above on the infringing web pages is not authorized by the copyright owner, or its agent, or the law. I have taken fair use into consideration.
- Type (or copy and paste) the following statement: “I swear, under penalty of perjury, that the information in this notification is accurate and that I am the copyright owner, or am authorized to act on behalf of the owner, of an exclusive right that is allegedly infringed.”
I swear, under penalty of perjury, that the information in this notification is accurate and that I am the copyright owner, or am authorized to act on behalf of the owner, of an exclusive right that is allegedly infringed.
- Please confirm that you have read our Takedown Policy: https://docs.readthedocs.io/en/latest/dmca/index.html
Yes
- So that we can get back to you, please provide either your telephone number or physical address:
[private]
- Please type your full legal name below to sign this request:
[private]
Data Processing Addendum (DPA)
Sub-processor list
- Effective:
April 16, 2021
- Last updated:
December 27, 2022
Read the Docs for Business uses services from the following sub-processors to provide documentation hosting services. This document supplements our Data Processing Addendum and may be separately updated on a periodic basis. A sub-processor is a third party data processor who has or potentially will have access to or will process personal data.
See also
Previous versions of this document, as well as the change history to this document, are available on GitHub
Infrastructure
- Amazon Web Services, Inc.
Cloud infrastructure provider.
Services
- Elasticsearch B.V.
Hosted ElasticSearch services for documentation search. Search indexes do not include user data.
- Sendgrid, Inc.
Provides email delivery to dashboard and admin users for site notifications and other generated messages. The body of notification emails can include user information, including email address.
- Google Analytics
Website analytics for dashboard and documentation sites.
- Stripe Inc.
Subscription payment provider. Data collected can include user data necessary to process payment transactions; however, this data is not processed directly by Read the Docs.
Monitoring
- New Relic
Application performance analytics. Data collected can include user data and visitor data used within application code.
- Sentry
Error analytics service used to log and track application errors. Error reports can include arguments passed to application code, which can include user and visitor data.
Support
- FrontApp, Inc.
Customer email support service. Can have access to user data, including user email and IP address, and stores communications related to user data.
Read the Docs can execute a DPA with any customer that receives data from the EU. You can complete this by reviewing and accepting the following pre-signed agreement:
Review the Read the Docs Data Processing Addendum
See also
- Read the Docs sub-processor list
An up-to-date list of the sub-processors we use for hosting services.
- Abandoned projects policy
Our policy of taking action on abandoned projects.
- Unofficial and unmaintained projects policy
Our policy of taking action on unofficial and unmaintained projects.
- Read the Docs Terms of Service
The terms of service for using Read the Docs Community and Read the Docs for Business. You may instead have a master services agreement for your subscription if you have a custom or enterprise contract.
- Privacy Policy
Our policy on collecting, storing, and protecting user and visitor data.
- Security policy
Our policies around application and infrastructure security.
- Security reports
How we respond to security incidents and how you report a security issue.
- Data Processing Addendum (DPA)
For GDPR and CCPA compliance, we provide a data processing addendum for Read the Docs for Business customers.
- DMCA takedown policy
Our process for taking down content based on DMCA requests and how to submit requests.
Advertising
Advertising is the single largest source of funding for Read the Docs. It allows us to:
Serve over 35 million pages of documentation per month
Serve over 40 TB of documentation per month
Host over 80,000 open source projects and support over 100,000 users
Pay a small team of dedicated full-time staff
Many advertising models involve tracking users around the internet, selling their data, and privacy intrusion in general. Instead of doing that, we built an Ethical Advertising model that respects user privacy.
We recognize that advertising is not for everyone. You may opt out of paid advertising although you will still see community ads. Gold members may also remove advertising from their projects for all visitors.
For businesses looking to remove advertising, please consider Read the Docs for Business.
EthicalAds
Read the Docs is a large, free web service. There is one proven business model to support this kind of site: Advertising. We are building the advertising model we want to exist, and we’re calling it EthicalAds.
EthicalAds respect users while providing value to advertisers. We don’t track you, sell your data, or anything else. We simply show ads to users, based on the content of the pages you look at. We also give 10% of our ad space to community projects, as our way of saying thanks to the open source community.
We talk a bit below about our worldview on advertising, if you want to know more.
Are you a marketer?
We built a whole business around privacy-focused advertising. If you’re trying to reach developers, we have a network of hand-approved sites (including Read the Docs) where your ads are shown.
Feedback
We’re a community, and we value your feedback. If you ever want to reach out about this effort, feel free to shoot us an email.
You can opt out of having paid ads on your projects, or seeing paid ads if you want. You will still see community ads, which we run for free to promote community projects.
Our worldview
We’re building the advertising model we want to exist:
We don’t track you
We don’t sell your data
We host everything ourselves, no third-party scripts or images
We’re doing newspaper advertising, on the internet. For a hundred years, newspapers put an ad on the page, some folks would see it, and advertisers would pay for this. This is our model.
So much ad tech has been built to track users. Following them across the web, from site to site, showing the same ads and gathering data about them. Then retailers sell your purchase data to try and attribute sales to advertising. Now there is an industry in doing fake ad clicks and other scams, which leads the ad industry to track you even more intrusively to know more about you. The current advertising industry is in a vicious downward spiral.
As developers, we understand the massive downsides of the current advertising industry. This includes malware, slow site performance, and huge databases of your personal data being sold to the highest bidder.
The trend in advertising is to have larger and larger ads. They should run before your content, they should take over the page, the bigger, weirder, or flashier the better.
We opt out
We don’t store personal information about you.
We only keep track of views and clicks.
We don’t build a profile of your personality to sell ads against.
We only show high quality ads from companies that are of interest to developers.
We are running a single, small, unobtrusive ad on documentation pages. The products should be interesting to you. The ads won’t flash or move.
We run the ads we want to have on our site, in a way that makes us feel good.
Additional details
We have additional documentation on the technical details of our advertising including our Do Not Track policy and our use of analytics.
We have an advertising FAQ written for advertisers.
We have gone into more detail about our views in our blog post about this topic.
Eric Holscher, one of our co-founders talks a bit more about funding open source this way on his blog.
After proving our ad model as a way to fund open source and building our ad serving infrastructure, we launched the EthicalAds network to help other projects be sustainable.
Join us
We’re building the advertising model we want to exist. We hope that others will join us in this mission:
If you’re a developer, talk to your marketing folks about using advertising that respects your privacy.
If you’re a marketer, vote with your dollars and support us in building the ad model we want to exist. Get more information on what we offer.
Community Ads
There are a large number of projects, conferences, and initiatives that we care about in the software and open source ecosystems. A large number of them operate like we did in the past, with almost no income. Our Community Ads program will highlight some of these projects.
There are a few qualifications for our Community Ads program:
Your organization and the linked site should not be trying to entice visitors to buy a product or service. We make an exception for conferences around open source projects if they are run not for profit and soliciting donations for open source projects.
A software project should have an OSI approved license.
We will not run a community ad for an organization tied to one of our paid advertisers.
We’ll show 10% of our ad inventory each month to support initiatives that we care about. Please complete an application to be considered for our Community Ads program.
Opting out
We have added multiple ways to opt out of the advertising on Read the Docs.
Gold members may remove advertising from their projects for all visitors.
You can opt out of seeing paid advertisements on documentation pages:
Go to the drop-down user menu in the top right of the Read the Docs dashboard and click Settings (https://readthedocs.org/accounts/edit/).
On the Advertising tab, you can deselect See paid advertising.
You will still see community ads for open source projects and conferences.
Project owners can also opt out of paid advertisements for their projects. You can change these options:
Go to your project page (/projects/<slug>/)
Go to Admin > Advertising
Change your advertising settings
If you are part of a company that uses Read the Docs to host documentation for a commercial product, we offer Read the Docs for Business, which provides a completely ad-free experience, additional build resources, and other great features like CDN support and private documentation.
If you would like to completely remove advertising from your open source project, but our commercial plans don’t seem like the right fit, please get in touch to discuss alternatives to advertising.
Advertising details
Read the Docs largely funds our operations and development through advertising. However, we aren’t willing to compromise our values, document authors, or site visitors simply to make a bit more money. That’s why we created our ethical advertising initiative.
We get a lot of inquiries about our approach to advertising which range from questions about our practices to requests to partner. The goal of this document is to shed light on the advertising industry, exactly what we do for advertising, and how what we do is different. If you have questions or comments, send us an email or open an issue on GitHub.
Other ad networks’ targeting
Some ad networks build a database of user data in order to predict the types of ads that are likely to be clicked. In the advertising industry, this is called behavioral targeting. This can include data such as:
sites a user has visited
a user’s search history
ads, pages, or stories a user has clicked on in the past
demographic information such as age, gender, or income level
Typically, getting a user’s page visit history is accomplished by the use of trackers (sometimes called beacons or pixels). For example, if a site uses a tracker from an ad network and a user visits that site, the site can now target future advertising to that user – a known past visitor – with that network. This is called retargeting.
Other ad predictions are made by grouping similar users together based on user data using machine learning. Frequently this involves an advertiser uploading personal data on users (often past customers of the advertiser) to an ad network and telling the network to target similar users. The idea is that two users with similar demographic information and similar interests would like the same products. In ad tech, this is known as lookalike audiences or similar audiences.
Understandably, many people have concerns about these targeting techniques. The modern advertising industry has built enormous value by centralizing massive amounts of data on as many people as possible.
Our targeting details
Read the Docs doesn’t use the above techniques. Instead, we target based solely upon:
Details of the page where the advertisement is shown including:
The name, keywords, or programming language associated with the project being viewed
Content of the page (e.g. H1, title, theme, etc.)
Whether the page is being viewed from a mobile device
General geography
We allow advertisers to target ads to a list of countries or to exclude countries from their advertising. For ads targeting the USA, we also support targeting by state or by metro area (DMA specifically).
We geolocate a user’s IP address to a country when a request is made.
Where ads are shown
We can place ads in:
the sidebar navigation
the footer of the page
on search result pages
a small footer fixed to the bottom of the viewport
on 404 pages (rare)
We show no more than one ad per page so you will never see both a sidebar ad and a footer ad on the same page.
Do Not Track Policy
Read the Docs supports Do Not Track (DNT) and respects users’ tracking preferences. For more details, see the Do Not Track section of our privacy policy.
Ad serving infrastructure
Our entire ad server is open source, so you can inspect how we’re doing things. We believe strongly in open source, and we practice what we preach.
Analytics
Analytics are a sensitive enough issue that they require their own section. In the spirit of full transparency, Read the Docs uses Google Analytics (GA). We go into a bit of detail on our use of GA in our Privacy Policy.
GA is a contentious issue inside Read the Docs and in our community. Some users are very privacy conscious and sensitive to the usage of GA. Some authors want their own analytics on their docs to see the usage their docs get. The developers at Read the Docs understand that different users have different priorities, and we try to respect the different viewpoints as much as possible while also accomplishing our own goals.
We have taken steps to address some of the privacy concerns surrounding GA. These steps apply both to analytics collected by Read the Docs and when authors enable analytics on their docs.
Users can opt-out of analytics by using the Do Not Track feature of their browser.
Read the Docs instructs Google to anonymize IP addresses sent to them.
The cookie set by GA is a session (non-persistent) cookie rather than the default 2 years.
Project maintainers can completely disable analytics on their own projects. Follow the steps in Disabling Google Analytics on your project.
Why we use analytics
Advertisers ask us questions that are easily answered with an analytics solution like “how many users do you have in Switzerland browsing Python docs?”. We need to be able to easily get this data. We also use data from GA for some development decisions such as what browsers to support (or not) or how much usage a particular page or feature gets.
Alternatives
We are always exploring our options with respect to analytics. There are alternatives but none of them are without downsides. Some alternatives are:
Run a different cloud analytics solution from a provider other than Google (e.g. Parse.ly, Matomo Cloud, Adobe Analytics). We priced a couple of these out based on our load and they are very expensive. They also just substitute one problem of data sharing with another.
Send data to GA (or another cloud analytics provider) on the server side and strip or anonymize personal data such as IPs before sending them. This would be a complex solution and involve additional infrastructure, but it would have many advantages. It would result in a loss of data on “sessions” and new vs. returning visitors which are of limited value to us.
Run a local JavaScript based analytics solution (e.g. Matomo Community). This involves additional infrastructure that needs to be always up. Frequently there are very large databases associated with this. Many of these solutions aren’t built to handle Read the Docs’ load.
Run a local analytics solution based on web server log parsing. This has the same infrastructure problems as above while also not capturing all the data we want (without additional engineering) like the programming language of the docs being shown or whether the docs are built with Sphinx or something else.
Ad blocking
Ad blockers fulfill a legitimate need to mitigate the significant downsides of advertising: tracking across the internet, the security implications of third-party code, and the impact on the UX and performance of sites.
At Read the Docs, we specifically didn’t want those things. That’s why we built our EthicalAds initiative with only relevant, unobtrusive ads that respect your privacy and don’t do creepy behavioral targeting.
Advertising is the single largest source of funding for Read the Docs. To keep our operations sustainable, we ask that you either allow our EthicalAds or go ad-free.
Allowing EthicalAds
If you use AdBlock or AdBlockPlus and you allow acceptable ads or privacy-friendly acceptable ads then you’re all set. Advertising on Read the Docs complies with both of these programs.
If you prefer not to allow acceptable ads but would consider allowing ads that benefit open source, please consider subscribing to either the wider Open Source Ads list or simply the Read the Docs Ads list.
Note
Because Read the Docs hosts documentation on many different domains, adding a normal ad block exception will only allow that single domain, not Read the Docs as a whole.
Going ad-free
Gold members may completely remove advertising for all visitors to their projects. Thank you for supporting Read the Docs.
Note
Previously, Gold members or Supporters were provided an ad-free reading experience across all projects on Read the Docs while logged-in. However, the cross-site cookies needed to make that work are no longer supported by major browsers outside of Chrome, and this feature will soon disappear entirely.
Statistics and data
It can be really hard to find good data on ad blocking. In the spirit of transparency, here is the data we have on ad blocking at Read the Docs.
Of Read the Docs users who run an ad blocker, a little over 50% allow acceptable ads
Read the Docs users running ad blockers click on ads at about the same rate as those not running an ad blocker.
Comparing with our server logs, roughly 28% of our hits did not register a Google Analytics (GA) pageview due to an ad blocker, privacy plugin, disabling JavaScript, or another reason.
Of users who do not block GA, about 6% opt out of analytics on Read the Docs by enabling Do Not Track.
Customizing advertising
Warning
This document details features that are a work in progress. To discuss this document, please get in touch in the issue tracker.
In addition to allowing users and documentation authors to opt out of advertising, we allow some additional controls for documentation authors to control the positioning and styling of advertising. This can improve the performance of advertising or make sure the ad is in a place where it fits well with the documentation.
Controlling the placement of an ad
It is possible for a documentation author to instruct Read the Docs to position advertising in a specific location. This is done by adding a specific element to the generated body. The ad will be inserted into this container wherever this element is in the document body.
<div id="ethical-ad-placement"></div>
In Sphinx
In Sphinx, this is typically done by adding a new template (under templates_path) for inclusion in the HTML sidebar in your conf.py.
## In conf.py
html_sidebars = {
"**": [
"localtoc.html",
"ethicalads.html", # Put the ad below the navigation but above previous/next
"relations.html",
"sourcelink.html",
"searchbox.html",
]
}
<!-- In _templates/ethicalads.html -->
<div id="ethical-ad-placement"></div>
The story of Read the Docs
Documenting projects is hard, hosting them shouldn’t be. Read the Docs was created to make hosting documentation simple.
Read the Docs was started with a couple main goals in mind. The first goal was to encourage people to write documentation, by removing the barrier of entry to hosting. The other goal was to create a central platform for people to find documentation. Having a shared platform for all documentation allows for innovation at the platform level, allowing work to be done once and benefit everyone.
Documentation matters, but it’s often overlooked. We think that we can help a documentation culture flourish. Great projects, such as Django and SQLAlchemy, and projects from companies like Mozilla, are already using Read the Docs to serve their documentation to the world.
The site has grown quite a bit over the past year. Our look back at 2013 shows some numbers that show our progress. The job isn’t anywhere near done yet, but it’s a great honor to be able to have such an impact already.
We plan to keep building a great experience for people hosting their docs with us, and for users of the documentation that we host.
Sponsors of Read the Docs
Running Read the Docs isn’t free, and the site wouldn’t be where it is today without generous support of our sponsors. Below is a list of all the folks who have helped the site financially, in order of the date they first started supporting us.
Current sponsors
AWS - They cover all of our hosting expenses every month. This is a pretty large sum of money, averaging around $5,000/mo.
Cloudflare - Cloudflare is providing us with an enterprise plan of their SSL for SaaS Providers product that enables us to provide SSL certificates for custom domains.
Chan Zuckerberg Initiative - Through their “Essential Open Source Software for Science” programme, they fund our ongoing efforts to improve scientific documentation and make Read the Docs a better service for scientific projects.
You? (Email us at hello@readthedocs.org for more info)
Past sponsors
Sponsorship information
As part of increasing sustainability, Read the Docs is testing out promoting sponsors on documentation pages. We have more information about this in our blog post about this effort.
Sponsor us
Contact us at rev@readthedocs.org for more information on sponsoring Read the Docs.
Documentation in scientific and academic publishing
On this page, we explore some of the many tools and practices that software documentation and academic writing share. If you are working within the field of science or academia, this page can be used as an introduction.
Documentation and technical writing are broad fields. Their tools and practices have grown relevant to most scientific activities. This includes building publications, books, educational resources, interactive data science, resources for data journalism and full-scale websites for research projects and courses.
Here’s a brief overview of some features that people in science and academic writing love about Read the Docs:
🪄 Easy to use
Documentation code doesn’t have to be written by a programmer. In fact, documentation coding languages are designed and developed so you don’t have to be a programmer, and there are many writing aids that make it easy to abstract from code and focus on content.
Getting started is also made easy:
All new to this? Take the official Jupyter Book Tutorial
Curious for practical code? See Example projects
Familiar with Sphinx? See How to use Jupyter notebooks in Sphinx
🔋 Batteries included: Graphs, computations, formulas, maps, diagrams and more
Take full advantage of getting all the richness of Jupyter Notebook combined with Sphinx and the giant ecosystem of extensions for both of these.
Here are some examples:
Use symbols familiar from math and physics, build advanced proofs. See also: sphinx-proof
Present results with plots, graphs, images and let users interact directly with your datasets and algorithms. See also: Matplotlib, Interactive Data Visualizations
Graphs, tables etc. are computed when the latest version of your project is built and published as a stand-alone website. All code examples on your website are validated each time you build.
📚 Bibliographies and external links
Maintain bibliography databases directly as code and have external links automatically verified.
Using extensions for Sphinx such as the popular sphinxcontrib-bibtex extension, you can maintain your bibliography with Sphinx directly or refer to entries in .bib files, as well as generate entire bibliography sections from those files.
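As an illustration, here is a minimal conf.py sketch for enabling sphinxcontrib-bibtex; the file name refs.bib is a hypothetical placeholder for your own BibTeX database.
# conf.py -- minimal sketch, assuming sphinxcontrib-bibtex is installed
# (for example via pip) and that refs.bib sits next to conf.py.
extensions = [
    "sphinxcontrib.bibtex",
]
bibtex_bibfiles = ["refs.bib"]  # one or more BibTeX databases
With this in place, pages can cite entries with the extension’s cite role and render the reference list with its bibliography directive; see the extension’s documentation for details.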
📜 Modern themes and classic PDF outputs

Use the latest state-of-the-art themes for web and have PDFs and e-book formats automatically generated.
New themes are improving every day, and when you write documentation based on Jupyter Book and Sphinx, you will separate your contents and semantics from your presentation logic. This way, you can keep up with the latest theme updates or try new themes.
Another example of the benefits from separating content and presentation logic: Your documentation also transforms into printable books and eBooks.
📐 Widgets, widgets and more widgets
Design your science project’s layout and components with widgets from a rich eco-system of open-source extensions built for many purposes. Special widgets help users display and interact with graphs, maps and more. Several extensions are built and invented by the science community.
⚙️ Automatic builds
Build and publish your project for every change made through Git (GitHub, GitLab, Bitbucket etc). Preview changes via pull requests. Receive notifications when something is wrong. How does this work? Have a look at this video:
💬 Collaboration and community

Science and academia have a big kinship with software developers: We ❤️ community. Our solutions and projects become better when we foster inclusivity and active participation. Read the Docs features easy access for readers to suggest changes via your git platform (GitHub, GitLab, Bitbucket etc.). But not just any unqualified feedback. Instead, the code and all the tools are available for your community to forge qualified contributions.
Your readers can become your co-authors!
Discuss changes via pull request and track all changes in your project’s version history.
Using git does not mean that anyone can go and change your code and your published project. The full ownership and permission handling remains in your hands. Project and organization owners on your git platform govern what is released and who has access to approve and build changes.
🔎 Full search and analytics
Read the Docs comes with a number of features bundled in that you would have to configure if you were hosting documentation elsewhere.
- Super-fast text search
Your documentation is automatically indexed and gets its own search function.
- Traffic statistics
Get full access to your traffic data and quickly see which of your pages are most popular.
- Search analytics
What are people searching for and do they get hits? From each search query in your documentation, we collect a neat little statistic that can help to improve the discoverability and relevance of your documentation.
- SEO - Don’t reinvent search engine optimization
Use built-in SEO best-practices from Sphinx, its themes and Read the Docs hosting. This can give you a good ranking on search engines as a direct outcome of simply writing and publishing your documentation project.
🌱 Grow your own solutions
The eco-system is open source and makes it accessible for anyone with Python skills to build their own extensions.
We want science communities to use Read the Docs and to be part of the documentation community 💞
Getting started: Jupyter Book
Jupyter Book on Read the Docs brings you the rich experience of computed Jupyter documents built together with a modern documentation tool. The results are beautiful and automatically deployed websites, built with Sphinx and Executable Book + all the extensions available in this ecosystem.
Here are some popular activities that are well-supported by Jupyter Book:
Publications and books
Course and research websites
Interactive classroom activities
Data science software documentation
Visit the gallery of solutions built with Jupyter Book »
Ready to get started?
All new to this? Take the official Jupyter Book Tutorial »
Curious for practical code? See the list of example projects »
Familiar with Sphinx? Read How to use Jupyter notebooks in Sphinx »
Examples and users
The Read the Docs community for science is already big and keeps growing. The Jupyter Project itself and the many sub-projects of Jupyter are built and published with Read the Docs.
Read the Docs open source philosophy
Read the Docs is open source software. We have licensed the code base as MIT, which provides almost no restrictions on the use of the code.
However, as a project there are things that we care about more than others. We built Read the Docs to support documentation in the open source community. The code is open for people to contribute to, so that they may build features into https://readthedocs.org that they want. We also believe sharing the code openly is a valuable learning tool, especially for demonstrating how to collaborate and maintain an enormous website.
Official support
The time of the core developers of Read the Docs is limited. We provide official support for the following things:
Local development on the Python code base
Usage of https://readthedocs.org for open source projects
Bug fixes in the code base, as it applies to running it on https://readthedocs.org
Unsupported
There are use cases that we don’t support, because they don’t further our goal of promoting documentation in the open source community.
We do not support:
Specific usage of Sphinx and MkDocs that doesn’t affect our hosting
Custom installations of Read the Docs at your company
Installation of Read the Docs on other platforms
Any installation issues outside of the Read the Docs Python code
Rationale
Read the Docs was founded to improve documentation in the open source community. We fully recognize and allow the code to be used for internal installs at companies, but we will not spend our time supporting it. Our time is limited, and we want to spend it on the mission we originally set out to support.
If you feel strongly about installing Read the Docs internal to a company, we will happily link to third party resources on this topic. Please open an issue with a proposal if you want to take on this task.
Read the Docs team
readthedocs.org is the largest open source documentation hosting service. Today we:
Serve over 55 million pages of documentation a month
Serve over 40 TB of documentation a month
Host over 80,000 open source projects and support over 100,000 users
Read the Docs is provided as a free service to the open source community, and we hope to maintain a reliable and stable hosting platform for years to come.
See also
- Our website: Who we are
More information about the staff and contributors of Read the Docs.
Teams
The Backend Team folks develop the Django code that powers the backend of the project.
The members of the Frontend Team care about UX, CSS, HTML, and JavaScript, and they maintain the project UI as well as the Sphinx theme.
As part of operating the site, members of the Operations Team maintain a 24/7 on-call rotation. This means that folks have to be available and have their phone in service.
The members of the Advocacy Team spread the word about all the work we do, and seek to understand users’ priorities and feedback.
The Support Team helps our thousands of users using the service, addressing tasks like resetting passwords, enabling experimental features, or troubleshooting build errors.
Note
Please don’t email us personally for support on Read the Docs. You can use our support form for any issues you may have.
Site support
Read the Docs offers support for projects on our Read the Docs for Business and Read the Docs Community platforms. We’re happy to assist with any questions or problems you have using either of our platforms.
Note
Read the Docs does not offer support for questions or problems with documentation tools or content. If you have a question or problem using a particular documentation tool, you should refer to external resources for help instead.
Some examples of requests that we support are:
“How do I transfer ownership of a Read the Docs project to another maintainer?”
“Why are my project builds being cancelled automatically?”
“How do I manage my subscription?”
You might also find the answers you are looking for in our documentation guides. These provide step-by-step solutions to common user requests.
For Read the Docs for Business, please fill out the form at https://readthedocs.com/support/. Our team responds to support requests within 2 business days or earlier for most plans. Faster support response times and support SLAs are available with plan upgrades.
For Read the Docs Community, please fill out the form at https://readthedocs.org/support/, and we will reply as soon as possible.
External resources
If you have questions about how to use a documentation tool or authoring content for your project, or have an issue that isn’t related to a bug with Read the Docs, Stack Overflow is the best place for your question.
Examples of good questions for Stack Overflow are:
“What is the best way to structure the table of contents across a project?”
“How do I structure translations inside of my project for easiest contribution from users?”
“How do I use Sphinx to use SVG images in HTML output but PNG in PDF output?”
Tip
Tag questions with read-the-docs
so other folks can find them easily.
Bug reports
If you have an issue with the actual functioning of Read the Docs, you can file bug reports on our GitHub issue tracker. You can also contribute changes and fixes to Read the Docs, as the code is open source.
Glossary
This page includes a number of terms that we use in our documentation, so that you have a reference for how we’re using them.
- CI/CD
CI/CD is a common abbreviation for Continuous Integration and Continuous Deployment. In some scenarios, they exist as two separate platforms. Read the Docs is a combined CI/CD platform made for documentation.
- dashboard
The “admin” site where Read the Docs projects are managed and configured. This varies for our two properties:
Read the Docs for Business: https://readthedocs.com/dashboard/.
Read the Docs Community: https://readthedocs.org/dashboard/.
- default version
Projects have a default version, usually the latest stable version of a project. The default version is the page a user is redirected to when loading the / URL for your project.
- discoverability
A documentation page is said to be discoverable when a user that needs it can find it through various methods: Navigation, search, and links from other pages are the most typical ways of making content discoverable.
- Docs as Code
A term used to describe the workflow of keeping documentation in a Git repository, along with source code. Popular in the open source software movement, and used by many technology companies.
- flyout menu
Menu displayed on the documentation, readily accessible for readers, containing the list of active versions, links to static downloads, and other useful links. Read more in our Flyout menu page.
- GitOps
Denotes the use of code maintained in Git to automate building, testing, and deployment of infrastructure. In terms of documentation, GitOps is applicable for Read the Docs, as the configuration for building documentation is stored in .readthedocs.yaml, and rules for publication of documentation can be automated. Similar to Docs as Code.
- maintainer
A maintainer is a special role that only exists on Read the Docs Community. The creator of a project on Read the Docs Community can invite other collaborators as maintainers with full ownership rights.
The maintainer role does not exist on Read the Docs for Business, which instead provides Organizations.
Please see Git provider integrations for more information.
- pinning
To pin a requirement means to explicitly specify which version should be used. Pinning software requirements is the most important technique to make a project reproducible.
When documentation builds, software dependencies are installed in the latest versions permitted by the pinning specification. Since new versions of software packages are released frequently, pinning helps prevent incompatibilities in a new release from suddenly breaking a documentation build.
Examples of Python dependencies:
# Exact pinning: Only allow Sphinx 5.3.0
sphinx==5.3.0

# Loose pinning: Lower and upper bounds result in the latest 5.3.x release
sphinx>=5.3,<5.4

# Very loose pinning: Lower and upper bounds result in the latest 5.x release
sphinx>=5,<6
Read the Docs recommends using exact pinning.
- pre-defined build jobs
Commands executed by Read the Docs when performing the build process. They cannot be overwritten by the user.
- project home
Page where you can access all the features of Read the Docs, from having an overview to browsing the latest builds or administering your project.
- project page
Another name for project home.
- reproducible
A documentation project is said to be reproducible when its sources build correctly on Read the Docs over a period of many years. You can also think of being reproducible as being robust or resilient.
Being “reproducible” is an important positive quality goal of documentation.
When builds are not reproducible and break due to external factors, they need frequent troubleshooting and manual fixing.
The most common external factor is that new versions of software dependencies are released.
- root URL
Home URL of your documentation without the /<lang> and /<version> segments. For projects without custom domains, it is the URL ending in .readthedocs.io/ (for example, https://docs.readthedocs.io as opposed to https://docs.readthedocs.io/en/latest).
- slug
A unique identifier for a project or version. This value comes from the project or version name, which is reduced to lowercase letters, numbers, and hyphens. You can retrieve your project or version slugs from our API.
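For illustration only, here is a minimal sketch of listing your project slugs through the API v3 projects endpoint, assuming you have created an API token in your account settings; the token value is a hypothetical placeholder.
# List your project slugs via the Read the Docs API v3.
# Minimal sketch; the token below is a hypothetical placeholder.
import requests

API_URL = "https://readthedocs.org/api/v3/projects/"
TOKEN = "your-api-token"  # create one in your account settings

response = requests.get(API_URL, headers={"Authorization": f"Token {TOKEN}"})
response.raise_for_status()
for project in response.json()["results"]:
    print(project["slug"])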
- static website
A static site or static website is a collection of HTML files, images, CSS and JavaScript that are served statically, as opposed to dynamic websites that generate a unique response for each request, using databases and user sessions.
Static websites are highly portable, as they do not depend on the webserver. They can also be viewed offline.
Documentation projects served on Read the Docs are static websites.
Tools to manage and generate static websites are commonly known as static site generators and there is a big overlap with documentation tools. Some static site generators are also documentation tools, and some documentation tools are also used to generate normal websites.
For instance, Sphinx is made for documentation but also used for blogging.
- subproject
Project A can be configured such that when requesting a URL /projects/<subproject-slug>, the root of project B is returned. In this case, project B is the subproject. Read more in Subprojects.
- user-defined build jobs
Commands defined by the user that Read the Docs will execute when performing the build process.
- webhook
A webhook is a special URL that can be called from another service, usually with a secret token. It is commonly used to start a build or a deployment or to send a status update.
There are two important types of webhooks for Read the Docs:
Git providers have webhooks which are special URLs that Read the Docs can call in order to notify about documentation builds.
Read the Docs has a unique webhook for each project that the Git provider calls when changes happen in Git.
See also: How to manually configure a Git repository integration and Build failure notifications
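As a rough illustration of the second type, the sketch below POSTs to a project’s incoming webhook to trigger a build; the URL and token are hypothetical placeholders copied from the project’s integration settings, and the exact parameters may differ for your setup.
# Trigger a documentation build by calling a project's incoming webhook.
# Minimal sketch; URL and token are hypothetical placeholders from the
# project's integration settings on Read the Docs.
import requests

WEBHOOK_URL = "https://readthedocs.org/api/v2/webhook/<project-slug>/<id>/"
TOKEN = "integration-secret-token"

response = requests.post(WEBHOOK_URL, data={"token": TOKEN, "branches": "latest"})
print(response.status_code, response.text)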
Read the Docs simplifies software documentation by building, versioning, and hosting of your docs, automatically. Treating documentation like code keeps your team in the same tools, and your documentation up to date.
- Up to date documentation
Whenever you push code to Git, Read the Docs will automatically build your docs so your code and documentation are always up-to-date. Get started with our tutorial.
- Documentation for every version
Read the Docs can host multiple versions of your docs. Keep your 1.0 and 2.0 documentation online, pulled directly from Git. Start hosting all your versions.
- Open source and user focused
Our company is bootstrapped and 100% user-focused, so our product gets better for our users instead of our investors. Read the Docs Community hosts documentation for over 100,000 large and small open source projects. Read the Docs for Business supports hundreds of organizations with product and internal documentation. Learn more about our two platforms.
First time here?
We have a few places for you to get started:
- Read the Docs tutorial
Take the first practical steps with Read the Docs.
- Choosing between our two platforms
Learn about the differences between Read the Docs Community and Read the Docs for Business.
- Example projects
Start your journey with an example project to hit the ground running.
Project setup and configuration
Start with the basics of setting up your project:
- Configuration file overview
Learn how to configure your project with a .readthedocs.yaml file.
- How to create reproducible builds
Learn how to make your builds reproducible.
Build process
Build your documentation with ease:
- Build process overview
Overview of how documentation builds happen.
- Pull request previews
Set up pull request builds and enjoy previews of each commit.
Hosting documentation
Learn more about our hosting features:
- Versions
Host multiple versions of your documentation.
- Subprojects
Host multiple projects under a single domain.
- Localization and Internationalization
Host your documentation in multiple languages.
- URL versioning schemes
Learn about different versioning schemes.
- Custom domains
Host your documentation on your own domain.
Maintaining projects
Keep your documentation up to date:
- Redirects
Redirect your old URLs to new ones.
- Analytics for search and traffic
Learn more about how users are interacting with your documentation.
- Security logs
Keep track of security events in your project.
Business features
Features for organizations and businesses:
- Business hosting
Learn more about our commercial features.
- Organizations
Learn how to manage your organization on Read the Docs.
- Single Sign-On (SSO)
Learn how to use single sign-on with Read the Docs.
How-to guides
Step-by-step guides for common tasks:
- How to configure pull request builds
Set up pull request builds and enjoy previews of each commit.
- How to use traffic analytics
Learn more about how users are interacting with your documentation.
- How to use cross-references with Sphinx
Learn how to use cross-references in a Sphinx project.
- All how-to guides
Browse the entire catalog for many more how-to guides.
Reference
More detailed information about Read the Docs:
- Public REST API
Automate your documentation with our API and save yourself some work.
- Changelog
See what’s new in Read the Docs.
- About Read the Docs
Learn more about Read the Docs and our company.