Compare commits
1 Commit
v1.8.0-bet
...
jw-1.0.1
Author | SHA1 | Date
---|---|---
 | c6a51a1cdc |
@@ -1,9 +0,0 @@
{
  "qpdf": {
    "version": "10.6.3"
  },
  "jbig2enc": {
    "version": "0.29",
    "git_tag": "0.29"
  }
}
@@ -1,21 +0,0 @@
**/__pycache__
/src-ui/.vscode
/src-ui/node_modules
/src-ui/dist
.git
/export
/consume
/media
/data
/docs
.pytest_cache
/dist
/scripts
/resources
**/tests
**/*.spec.ts
**/htmlcov
/src/.pytest_cache
.idea
.venv/
.vscode/
@@ -18,20 +18,8 @@ max_line_length = off
indent_size = 4
indent_style = space

[*.{yml,yaml}]
indent_style = space

[*.rst]
indent_style = space

[*.md]
indent_style = space

# Tests don't get a line width restriction. It's still a good idea to follow
# the 79 character rule, but in the interests of clarity, tests often need to
# violate it.
[**/test_*.py]
max_line_length = off

[Dockerfile*]
indent_style = space
2
.env
@@ -1,2 +0,0 @@
COMPOSE_PROJECT_NAME=paperless
export PROMPT="(pipenv-projectname)$P$G"
1
.gitattributes
vendored
Normal file
@@ -0,0 +1 @@
THANKS.md merge=union
86
.github/ISSUE_TEMPLATE/bug-report.yml
vendored
@@ -1,86 +0,0 @@
name: Bug report
description: Something is not working
title: "[BUG] Concise description of the issue"
labels: ["bug", "unconfirmed"]
body:
  - type: markdown
    attributes:
      value: |
        Have a question? 👉 [Start a new discussion](https://github.com/paperless-ngx/paperless-ngx/discussions/new) or [ask in chat](https://matrix.to/#/#paperless:adnidor.de).

        Before opening an issue, please double check:

        - [The troubleshooting documentation](https://paperless-ngx.readthedocs.io/en/latest/troubleshooting.html).
        - [The installation instructions](https://paperless-ngx.readthedocs.io/en/latest/setup.html#installation).
        - [Existing issues and discussions](https://github.com/paperless-ngx/paperless-ngx/search?q=&type=issues).

        If you encounter issues while installing or configuring Paperless-ngx, please post in the ["Support" section of the discussions](https://github.com/paperless-ngx/paperless-ngx/discussions/new?category=support).
  - type: textarea
    id: description
    attributes:
      label: Description
      description: A clear and concise description of what the bug is. If applicable, add screenshots to help explain your problem.
      placeholder: |
        Currently Paperless does not work when...

        [Screenshot if applicable]
    validations:
      required: true
  - type: textarea
    id: reproduction
    attributes:
      label: Steps to reproduce
      description: Steps to reproduce the behavior.
      placeholder: |
        1. Go to '...'
        2. Click on '....'
        3. See error
    validations:
      required: true
  - type: textarea
    id: logs
    attributes:
      label: Webserver logs
      description: If available, post any logs from the web server related to your issue.
      render: bash
  - type: input
    id: version
    attributes:
      label: Paperless-ngx version
      placeholder: e.g. 1.6.0
    validations:
      required: true
  - type: input
    id: host-os
    attributes:
      label: Host OS
      description: Host OS of the machine running paperless-ngx. Please add the architecture (uname -m) if applicable.
      placeholder: e.g. Archlinux / Ubuntu 20.04 / Raspberry Pi `arm64`
    validations:
      required: true
  - type: dropdown
    id: install-method
    attributes:
      label: Installation method
      options:
        - Docker
        - Bare metal
        - Other (please describe above)
    validations:
      required: true
  - type: input
    id: browser
    attributes:
      label: Browser
      description: Which browser you are using, if relevant.
      placeholder: e.g. Chrome, Safari
  - type: input
    id: config-changes
    attributes:
      label: Configuration changes
      description: Any configuration changes you made in `docker-compose.yml`, `docker-compose.env` or `paperless.conf`.
  - type: input
    id: other
    attributes:
      label: Other
      description: Any other relevant details.
11
.github/ISSUE_TEMPLATE/config.yml
vendored
@@ -1,11 +0,0 @@
blank_issues_enabled: false
contact_links:
  - name: 🤔 Questions and Help
    url: https://github.com/paperless-ngx/paperless-ngx/discussions
    about: This issue tracker is not for support questions. Please refer to our Discussions.
  - name: 💬 Chat
    url: https://matrix.to/#/#paperless:adnidor.de
    about: Want to discuss Paperless-ngx with others? Check out our chat.
  - name: 🚀 Feature Request
    url: https://github.com/paperless-ngx/paperless-ngx/discussions/new?category=feature-requests
    about: Remember to search for existing feature requests and "up-vote" any you like
32
.github/PULL_REQUEST_TEMPLATE.md
vendored
@@ -1,32 +0,0 @@
<!--
Note: All PRs with code changes should be targeted to the `dev` branch, pure documentation changes can target `main`
-->

## Proposed change

<!--
Please include a summary of the change and which issue is fixed (if any) and any relevant motivation / context. List any dependencies that are required for this change. If appropriate, please include an explanation of how your proposed change can be tested. Screenshots and / or videos can also be helpful if appropriate.
-->

Fixes # (issue)

## Type of change

<!--
What type of change does your PR introduce to Paperless-ngx?
NOTE: Please check only one box!
-->

- [ ] Bug fix (non-breaking change which fixes an issue)
- [ ] New feature (non-breaking change which adds functionality)
- [ ] Breaking change (fix or feature that would cause existing functionality to not work as expected)
- [ ] Other (please explain)

## Checklist:

- [ ] I have read & agree with the [contributing guidelines](https://github.com/paperless-ngx/paperless-ngx/blob/main/CONTRIBUTING.md).
- [ ] If applicable, I have tested my code for new features & regressions on both mobile & desktop devices, using the latest version of major browsers.
- [ ] If applicable, I have checked that all tests pass, see [documentation](https://paperless-ngx.readthedocs.io/en/latest/extending.html#back-end-development).
- [ ] I have run all `pre-commit` hooks, see [documentation](https://paperless-ngx.readthedocs.io/en/latest/extending.html#code-formatting-with-pre-commit-hooks).
- [ ] I have made corresponding changes to the documentation as needed.
- [ ] I have checked my modifications for any breaking changes.
48
.github/dependabot.yml
vendored
@@ -1,48 +0,0 @@
# https://docs.github.com/en/code-security/supply-chain-security/keeping-your-dependencies-updated-automatically/configuration-options-for-dependency-updates#package-ecosystem

version: 2
updates:

  # Enable version updates for npm
  - package-ecosystem: "npm"
    target-branch: "dev"
    # Look for `package.json` and `lock` files in the `/src-ui` directory
    directory: "/src-ui"
    # Check the npm registry for updates every month
    schedule:
      interval: "monthly"
    labels:
      - "frontend"
      - "dependencies"
    # Add reviewers
    reviewers:
      - "paperless-ngx/frontend"

  # Enable version updates for Python
  - package-ecosystem: "pip"
    target-branch: "dev"
    # Look for a `Pipfile` in the `root` directory
    directory: "/"
    # Check for updates once a week
    schedule:
      interval: "weekly"
    labels:
      - "backend"
      - "dependencies"
    # Add reviewers
    reviewers:
      - "paperless-ngx/backend"

  # Enable updates for Github Actions
  - package-ecosystem: "github-actions"
    target-branch: "dev"
    directory: "/"
    schedule:
      # Check for updates to GitHub Actions every month
      interval: "monthly"
    labels:
      - "ci-cd"
      - "dependencies"
    # Add reviewers
    reviewers:
      - "paperless-ngx/ci-cd"
37
.github/release-drafter.yml
vendored
@@ -1,37 +0,0 @@
categories:
  - title: 'Breaking Changes'
    labels:
      - 'breaking-change'
  - title: 'Features'
    labels:
      - 'enhancement'
  - title: 'Bug Fixes'
    labels:
      - 'bug'
  - title: 'Documentation'
    label: 'documentation'
  - title: 'Maintenance'
    labels:
      - 'chore'
      - 'deployment'
      - 'translation'
  - title: 'Dependencies'
    collapse-after: 3
    label: 'dependencies'
include-labels:
  - 'enhancement'
  - 'bug'
  - 'chore'
  - 'deployment'
  - 'translation'
  - 'dependencies'
replacers: # Changes "Feature: Update checker" to "Update checker"
  - search: '/Feature:|Feat:|\[feature\]/gi'
    replace: ''
category-template: '### $TITLE'
change-template: '- $TITLE [@$AUTHOR](https://github.com/$AUTHOR) ([#$NUMBER]($URL))'
change-title-escapes: '\<*_&#@'
template: |
  ## paperless-ngx $RESOLVED_VERSION

  $CHANGES
254
.github/scripts/cleanup-tags.py
vendored
@@ -1,254 +0,0 @@
import logging
import os
from argparse import ArgumentParser
from typing import Final
from typing import List
from urllib.parse import quote

import requests
from common import get_log_level

logger = logging.getLogger("cleanup-tags")


class GithubContainerRegistry:
    def __init__(
        self,
        session: requests.Session,
        token: str,
        owner_or_org: str,
    ):
        self._session: requests.Session = session
        self._token = token
        self._owner_or_org = owner_or_org
        # https://docs.github.com/en/rest/branches/branches
        self._BRANCHES_ENDPOINT = "https://api.github.com/repos/{OWNER}/{REPO}/branches"
        if self._owner_or_org == "paperless-ngx":
            # https://docs.github.com/en/rest/packages#get-all-package-versions-for-a-package-owned-by-an-organization
            self._PACKAGES_VERSIONS_ENDPOINT = "https://api.github.com/orgs/{ORG}/packages/{PACKAGE_TYPE}/{PACKAGE_NAME}/versions"
            # https://docs.github.com/en/rest/packages#delete-package-version-for-an-organization
            self._PACKAGE_VERSION_DELETE_ENDPOINT = "https://api.github.com/orgs/{ORG}/packages/{PACKAGE_TYPE}/{PACKAGE_NAME}/versions/{PACKAGE_VERSION_ID}"
        else:
            # https://docs.github.com/en/rest/packages#get-all-package-versions-for-a-package-owned-by-the-authenticated-user
            self._PACKAGES_VERSIONS_ENDPOINT = "https://api.github.com/user/packages/{PACKAGE_TYPE}/{PACKAGE_NAME}/versions"
            # https://docs.github.com/en/rest/packages#delete-a-package-version-for-the-authenticated-user
            self._PACKAGE_VERSION_DELETE_ENDPOINT = "https://api.github.com/user/packages/{PACKAGE_TYPE}/{PACKAGE_NAME}/versions/{PACKAGE_VERSION_ID}"

    def __enter__(self):
        self._session.headers.update(
            {
                "Accept": "application/vnd.github.v3+json",
                "Authorization": f"token {self._token}",
            },
        )
        return self

    def __exit__(self, exc_type, exc_val, exc_tb):
        if "Accept" in self._session.headers:
            del self._session.headers["Accept"]
        if "Authorization" in self._session.headers:
            del self._session.headers["Authorization"]

    def _read_all_pages(self, endpoint):
        internal_data = []

        while True:
            resp = self._session.get(endpoint)
            if resp.status_code == 200:
                internal_data += resp.json()
                if "next" in resp.links:
                    endpoint = resp.links["next"]["url"]
                else:
                    logger.debug("Exiting pagination loop")
                    break
            else:
                logger.warning(f"Request to {endpoint} return HTTP {resp.status_code}")
                break

        return internal_data

    def get_branches(self, repo: str):
        endpoint = self._BRANCHES_ENDPOINT.format(OWNER=self._owner_or_org, REPO=repo)
        internal_data = self._read_all_pages(endpoint)
        return internal_data

    def filter_branches_by_name_pattern(self, branch_data, pattern: str):
        matches = {}

        for branch in branch_data:
            if branch["name"].startswith(pattern):
                matches[branch["name"]] = branch

        return matches

    def get_package_versions(
        self,
        package_name: str,
        package_type: str = "container",
    ) -> List:
        package_name = quote(package_name, safe="")
        endpoint = self._PACKAGES_VERSIONS_ENDPOINT.format(
            ORG=self._owner_or_org,
            PACKAGE_TYPE=package_type,
            PACKAGE_NAME=package_name,
        )

        internal_data = self._read_all_pages(endpoint)

        return internal_data

    def filter_packages_by_tag_pattern(self, package_data, pattern: str):
        matches = {}

        for package in package_data:
            if "metadata" in package and "container" in package["metadata"]:
                container_metadata = package["metadata"]["container"]
                if "tags" in container_metadata:
                    container_tags = container_metadata["tags"]
                    for tag in container_tags:
                        if tag.startswith(pattern):
                            matches[tag] = package
                            break

        return matches

    def filter_packages_untagged(self, package_data):
        matches = {}

        for package in package_data:
            if "metadata" in package and "container" in package["metadata"]:
                container_metadata = package["metadata"]["container"]
                if "tags" in container_metadata:
                    container_tags = container_metadata["tags"]
                    if not len(container_tags):
                        matches[package["name"]] = package

        return matches

    def delete_package_version(self, package_name, package_data):
        package_name = quote(package_name, safe="")
        endpoint = self._PACKAGE_VERSION_DELETE_ENDPOINT.format(
            ORG=self._owner_or_org,
            PACKAGE_TYPE=package_data["metadata"]["package_type"],
            PACKAGE_NAME=package_name,
            PACKAGE_VERSION_ID=package_data["id"],
        )
        resp = self._session.delete(endpoint)
        if resp.status_code != 204:
            logger.warning(
                f"Request to delete {endpoint} returned HTTP {resp.status_code}",
            )


def _main():
    parser = ArgumentParser(
        description="Using the GitHub API locate and optionally delete container"
        " tags which no longer have an associated feature branch",
    )

    parser.add_argument(
        "--delete",
        action="store_true",
        default=False,
        help="If provided, actually delete the container tags",
    )

    # TODO There's a lot of untagged images, do those need to stay for anything?
    parser.add_argument(
        "--untagged",
        action="store_true",
        default=False,
        help="If provided, delete untagged containers as well",
    )

    parser.add_argument(
        "--loglevel",
        default="info",
        help="Configures the logging level",
    )

    args = parser.parse_args()

    logging.basicConfig(
        level=get_log_level(args),
        datefmt="%Y-%m-%d %H:%M:%S",
        format="%(asctime)s %(levelname)-8s %(message)s",
    )

    repo_owner: Final[str] = os.environ["GITHUB_REPOSITORY_OWNER"]
    repo: Final[str] = os.environ["GITHUB_REPOSITORY"]
    gh_token: Final[str] = os.environ["GITHUB_TOKEN"]

    with requests.session() as sess:
        with GithubContainerRegistry(sess, gh_token, repo_owner) as gh_api:
            all_branches = gh_api.get_branches("paperless-ngx")
            logger.info(f"Located {len(all_branches)} branches of {repo_owner}/{repo} ")

            feature_branches = gh_api.filter_branches_by_name_pattern(
                all_branches,
                "feature-",
            )
            logger.info(f"Located {len(feature_branches)} feature branches")

            for package_name in ["paperless-ngx", "paperless-ngx/builder/cache/app"]:

                all_package_versions = gh_api.get_package_versions(package_name)
                logger.info(
                    f"Located {len(all_package_versions)} versions of package {package_name}",
                )

                packages_tagged_feature = gh_api.filter_packages_by_tag_pattern(
                    all_package_versions,
                    "feature-",
                )
                logger.info(
                    f'Located {len(packages_tagged_feature)} versions of package {package_name} tagged "feature-"',
                )

                untagged_packages = gh_api.filter_packages_untagged(
                    all_package_versions,
                )
                logger.info(
                    f"Located {len(untagged_packages)} untagged versions of package {package_name}",
                )

                to_delete = list(
                    set(packages_tagged_feature.keys()) - set(feature_branches.keys()),
                )
                logger.info(
                    f"Located {len(to_delete)} versions of package {package_name} to delete",
                )

                for tag_to_delete in to_delete:
                    package_version_info = packages_tagged_feature[tag_to_delete]

                    if args.delete:
                        logger.info(
                            f"Deleting {tag_to_delete} (id {package_version_info['id']})",
                        )
                        gh_api.delete_package_version(
                            package_name,
                            package_version_info,
                        )

                    else:
                        logger.info(
                            f"Would delete {tag_to_delete} (id {package_version_info['id']})",
                        )

                if args.untagged:
                    logger.info(f"Deleting untagged packages of {package_name}")
                    for to_delete_name in untagged_packages:
                        to_delete_version = untagged_packages[to_delete_name]
                        logger.info(f"Deleting id {to_delete_version['id']}")
                        if args.delete:
                            gh_api.delete_package_version(
                                package_name,
                                to_delete_version,
                            )
                else:
                    logger.info("Leaving untagged images untouched")


if __name__ == "__main__":
    _main()
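For orientation (not part of the diff): the class above is only driven by `_main()` in CI, but its dry-run path can be sketched as below. This assumes the script were importable as a module, which it is not packaged to be here, and that a `GITHUB_TOKEN` environment variable is available; every name outside the diff is hypothetical.

```python
# Hedged sketch only: mirrors the dry-run path of _main() above.
# Assumes the file has been made importable as `cleanup_tags` (hypothetical)
# and that GITHUB_TOKEN is set in the environment.
import os

import requests

from cleanup_tags import GithubContainerRegistry  # hypothetical module name

with requests.session() as sess:
    with GithubContainerRegistry(sess, os.environ["GITHUB_TOKEN"], "paperless-ngx") as api:
        branches = api.get_branches("paperless-ngx")
        feature_branches = api.filter_branches_by_name_pattern(branches, "feature-")
        versions = api.get_package_versions("paperless-ngx")
        tagged = api.filter_packages_by_tag_pattern(versions, "feature-")
        # Tags whose feature branch no longer exists; --delete would remove these
        print(sorted(set(tagged.keys()) - set(feature_branches.keys())))
```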
44
.github/scripts/common.py
vendored
@@ -1,44 +0,0 @@
#!/usr/bin/env python3
import logging
from argparse import ArgumentError


def get_image_tag(
    repo_name: str,
    pkg_name: str,
    pkg_version: str,
) -> str:
    """
    Returns a string representing the normal image for a given package
    """
    return f"ghcr.io/{repo_name.lower()}/builder/{pkg_name}:{pkg_version}"


def get_cache_image_tag(
    repo_name: str,
    pkg_name: str,
    pkg_version: str,
    branch_name: str,
) -> str:
    """
    Returns a string representing the expected image cache tag for a given package

    Registry type caching is utilized for the builder images, to allow fast
    rebuilds, generally almost instant for the same version
    """
    return f"ghcr.io/{repo_name.lower()}/builder/cache/{pkg_name}:{pkg_version}"


def get_log_level(args) -> int:
    levels = {
        "critical": logging.CRITICAL,
        "error": logging.ERROR,
        "warn": logging.WARNING,
        "warning": logging.WARNING,
        "info": logging.INFO,
        "debug": logging.DEBUG,
    }
    level = levels.get(args.loglevel.lower())
    if level is None:
        level = logging.INFO
    return level
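A quick usage sketch (not part of the diff) of the two tag helpers above, with illustrative repository and version values; note that `branch_name` is accepted by `get_cache_image_tag` but does not appear in the returned tag.

```python
# Illustrative values only; run from .github/scripts/ so `common` is importable.
from common import get_cache_image_tag
from common import get_image_tag

print(get_image_tag("paperless-ngx/paperless-ngx", "qpdf", "10.6.3"))
# -> ghcr.io/paperless-ngx/paperless-ngx/builder/qpdf:10.6.3

print(get_cache_image_tag("paperless-ngx/paperless-ngx", "qpdf", "10.6.3", "dev"))
# -> ghcr.io/paperless-ngx/paperless-ngx/builder/cache/qpdf:10.6.3
```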
92
.github/scripts/get-build-json.py
vendored
@@ -1,92 +0,0 @@
#!/usr/bin/env python3
"""
This is a helper script for the multi-stage Docker image builder.
It provides a single point of configuration for package version control.
The output JSON object is used by the CI workflow to determine what versions
to build and pull into the final Docker image.

Python package information is obtained from the Pipfile.lock. As this is
kept updated by dependabot, it usually will need no further configuration.
The sole exception currently is pikepdf, which has a dependency on qpdf,
and is configured here to use the latest version of qpdf built by the workflow.

Other package version information is configured directly below, generally by
setting the version and Git information, if any.

"""
import argparse
import json
import os
from pathlib import Path
from typing import Final

from common import get_cache_image_tag
from common import get_image_tag


def _main():
    parser = argparse.ArgumentParser(
        description="Generate a JSON object of information required to build the given package, based on the Pipfile.lock",
    )
    parser.add_argument(
        "package",
        help="The name of the package to generate JSON for",
    )

    PIPFILE_LOCK_PATH: Final[Path] = Path("Pipfile.lock")
    BUILD_CONFIG_PATH: Final[Path] = Path(".build-config.json")

    # Read the main config file
    build_json: Final = json.loads(BUILD_CONFIG_PATH.read_text())

    # Read Pipfile.lock file
    pipfile_data: Final = json.loads(PIPFILE_LOCK_PATH.read_text())

    args: Final = parser.parse_args()

    # Read from environment variables set by GitHub Actions
    repo_name: Final[str] = os.environ["GITHUB_REPOSITORY"]
    branch_name: Final[str] = os.environ["GITHUB_REF_NAME"]

    # Default output values
    version = None
    extra_config = {}

    if args.package in pipfile_data["default"]:
        # Read the version from Pipfile.lock
        pkg_data = pipfile_data["default"][args.package]
        pkg_version = pkg_data["version"].split("==")[-1]
        version = pkg_version

        # Any extra/special values needed
        if args.package == "pikepdf":
            extra_config["qpdf_version"] = build_json["qpdf"]["version"]

    elif args.package in build_json:
        version = build_json[args.package]["version"]

    else:
        raise NotImplementedError(args.package)

    # The JSON object we'll output
    output = {
        "name": args.package,
        "version": version,
        "image_tag": get_image_tag(repo_name, args.package, version),
        "cache_tag": get_cache_image_tag(
            repo_name,
            args.package,
            version,
            branch_name,
        ),
    }

    # Add anything special a package may need
    output.update(extra_config)

    # Output the JSON info to stdout
    print(json.dumps(output))


if __name__ == "__main__":
    _main()
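The workflows later in this diff call this script as `build_json=$(python ${GITHUB_WORKSPACE}/.github/scripts/get-build-json.py <package>)` and parse the printed JSON with `fromJSON()`. A minimal local sketch of the same call is shown below; it assumes a checkout containing `Pipfile.lock` and `.build-config.json` and is run from the repository root, and the environment values are illustrative only.

```python
# Hedged sketch: invoke the script the way the CI steps do and read its output.
import json
import os
import subprocess

env = dict(
    os.environ,
    GITHUB_REPOSITORY="paperless-ngx/paperless-ngx",  # illustrative
    GITHUB_REF_NAME="dev",  # illustrative
)
completed = subprocess.run(
    ["python", ".github/scripts/get-build-json.py", "qpdf"],
    capture_output=True,
    check=True,
    env=env,
    text=True,
)
info = json.loads(completed.stdout)
# Expected keys per the script above: name, version, image_tag, cache_tag
print(info["name"], info["version"], info["image_tag"])
```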
15
.github/stale.yml
vendored
@@ -1,15 +0,0 @@
# Number of days of inactivity before an issue becomes stale
daysUntilStale: 30
# Number of days of inactivity before a stale issue is closed
daysUntilClose: 7
onlyLabels:
  - unconfirmed
# Label to use when marking an issue as stale
staleLabel: stale
# Comment to post when marking an issue as stale. Set to `false` to disable
markComment: >
  This issue has been automatically marked as stale because it has not had
  recent activity. It will be closed if no further activity occurs. Thank you
  for your contributions.
# Comment to post when closing a stale issue. Set to `false` to disable
closeComment: false
368
.github/workflows/ci.yml
vendored
@@ -1,368 +0,0 @@
name: ci

on:
  push:
    tags:
      # https://semver.org/#spec-item-2
      - 'v[0-9]+.[0-9]+.[0-9]+'
      # https://semver.org/#spec-item-9
      - 'v[0-9]+.[0-9]+.[0-9]+-beta.rc[0-9]+'
    branches-ignore:
      - 'translations**'
  pull_request:
    branches-ignore:
      - 'translations**'

jobs:
  documentation:
    name: "Build Documentation"
    runs-on: ubuntu-20.04
    steps:
      -
        name: Checkout
        uses: actions/checkout@v3
      -
        name: Install pipenv
        run: pipx install pipenv
      -
        name: Set up Python
        uses: actions/setup-python@v4
        with:
          python-version: 3.9
          cache: "pipenv"
          cache-dependency-path: 'Pipfile.lock'
      -
        name: Install dependencies
        run: |
          pipenv sync --dev
      -
        name: Make documentation
        run: |
          cd docs/
          pipenv run make html
      -
        name: Upload artifact
        uses: actions/upload-artifact@v3
        with:
          name: documentation
          path: docs/_build/html/

  ci-backend:
    uses: ./.github/workflows/reusable-ci-backend.yml

  ci-frontend:
    uses: ./.github/workflows/reusable-ci-frontend.yml

  prepare-docker-build:
    name: Prepare Docker Pipeline Data
    if: github.event_name == 'push' && (startsWith(github.ref, 'refs/heads/feature-') || github.ref == 'refs/heads/dev' || github.ref == 'refs/heads/beta' || contains(github.ref, 'beta.rc') || startsWith(github.ref, 'refs/tags/v'))
    runs-on: ubuntu-20.04
    # If the push triggered the installer library workflow, wait for it to
    # complete here. This ensures the required versions for the final
    # image have been built, while not waiting at all if the versions haven't changed
    concurrency:
      group: build-installer-library
      cancel-in-progress: false
    needs:
      - documentation
      - ci-backend
      - ci-frontend
    steps:
      -
        name: Set ghcr repository name
        id: set-ghcr-repository
        run: |
          ghcr_name=$(echo "${GITHUB_REPOSITORY}" | awk '{ print tolower($0) }')
          echo ::set-output name=repository::${ghcr_name}
      -
        name: Checkout
        uses: actions/checkout@v3
      -
        name: Set up Python
        uses: actions/setup-python@v4
        with:
          python-version: "3.9"
      -
        name: Setup qpdf image
        id: qpdf-setup
        run: |
          build_json=$(python ${GITHUB_WORKSPACE}/.github/scripts/get-build-json.py qpdf)

          echo ${build_json}

          echo ::set-output name=qpdf-json::${build_json}
      -
        name: Setup psycopg2 image
        id: psycopg2-setup
        run: |
          build_json=$(python ${GITHUB_WORKSPACE}/.github/scripts/get-build-json.py psycopg2)

          echo ${build_json}

          echo ::set-output name=psycopg2-json::${build_json}
      -
        name: Setup pikepdf image
        id: pikepdf-setup
        run: |
          build_json=$(python ${GITHUB_WORKSPACE}/.github/scripts/get-build-json.py pikepdf)

          echo ${build_json}

          echo ::set-output name=pikepdf-json::${build_json}
      -
        name: Setup jbig2enc image
        id: jbig2enc-setup
        run: |
          build_json=$(python ${GITHUB_WORKSPACE}/.github/scripts/get-build-json.py jbig2enc)

          echo ${build_json}

          echo ::set-output name=jbig2enc-json::${build_json}

    outputs:

      ghcr-repository: ${{ steps.set-ghcr-repository.outputs.repository }}

      qpdf-json: ${{ steps.qpdf-setup.outputs.qpdf-json }}

      pikepdf-json: ${{ steps.pikepdf-setup.outputs.pikepdf-json }}

      psycopg2-json: ${{ steps.psycopg2-setup.outputs.psycopg2-json }}

      jbig2enc-json: ${{ steps.jbig2enc-setup.outputs.jbig2enc-json}}

  # build and push image to docker hub.
  build-docker-image:
    runs-on: ubuntu-20.04
    concurrency:
      group: ${{ github.workflow }}-build-docker-image-${{ github.ref_name }}
      cancel-in-progress: true
    needs:
      - prepare-docker-build
    steps:
      -
        name: Check pushing to Docker Hub
        id: docker-hub
        # Only push to Dockerhub from the main repo AND the ref is either:
        # main
        # dev
        # beta
        # a tag
        # Otherwise forks would require a Docker Hub account and secrets setup
        run: |
          if [[ ${{ needs.prepare-docker-build.outputs.ghcr-repository }} == "paperless-ngx/paperless-ngx" && ( ${{ github.ref_name }} == "main" || ${{ github.ref_name }} == "dev" || ${{ github.ref_name }} == "beta" || ${{ startsWith(github.ref, 'refs/tags/v') }} == "true" ) ]] ; then
            echo "Enabling DockerHub image push"
            echo ::set-output name=enable::"true"
          else
            echo "Not pushing to DockerHub"
            echo ::set-output name=enable::"false"
          fi
      -
        name: Gather Docker metadata
        id: docker-meta
        uses: docker/metadata-action@v4
        with:
          images: |
            ghcr.io/${{ needs.prepare-docker-build.outputs.ghcr-repository }}
            name=paperlessngx/paperless-ngx,enable=${{ steps.docker-hub.outputs.enable }}
          tags: |
            # Tag branches with branch name
            type=ref,event=branch
            # Process semver tags
            # For a tag x.y.z or vX.Y.Z, output an x.y.z and x.y image tag
            type=semver,pattern={{version}}
            type=semver,pattern={{major}}.{{minor}}
      -
        name: Checkout
        uses: actions/checkout@v3
      -
        name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v2
      -
        name: Set up QEMU
        uses: docker/setup-qemu-action@v2
      -
        name: Login to Github Container Registry
        uses: docker/login-action@v2
        with:
          registry: ghcr.io
          username: ${{ github.actor }}
          password: ${{ secrets.GITHUB_TOKEN }}
      -
        name: Login to Docker Hub
        uses: docker/login-action@v2
        # Don't attempt to login if not pushing to Docker Hub
        if: steps.docker-hub.outputs.enable == 'true'
        with:
          username: ${{ secrets.DOCKERHUB_USERNAME }}
          password: ${{ secrets.DOCKERHUB_TOKEN }}
      -
        name: Build and push
        uses: docker/build-push-action@v3
        with:
          context: .
          file: ./Dockerfile
          platforms: linux/amd64,linux/arm/v7,linux/arm64
          push: ${{ github.event_name != 'pull_request' }}
          tags: ${{ steps.docker-meta.outputs.tags }}
          labels: ${{ steps.docker-meta.outputs.labels }}
          build-args: |
            JBIG2ENC_VERSION=${{ fromJSON(needs.prepare-docker-build.outputs.jbig2enc-json).version }}
            QPDF_VERSION=${{ fromJSON(needs.prepare-docker-build.outputs.qpdf-json).version }}
            PIKEPDF_VERSION=${{ fromJSON(needs.prepare-docker-build.outputs.pikepdf-json).version }}
            PSYCOPG2_VERSION=${{ fromJSON(needs.prepare-docker-build.outputs.psycopg2-json).version }}
          # Get cache layers from this branch, then dev, then main
          # This allows new branches to get at least some cache benefits, generally from dev
          cache-from: |
            type=registry,ref=ghcr.io/${{ needs.prepare-docker-build.outputs.ghcr-repository }}/builder/cache/app:${{ github.ref_name }}
            type=registry,ref=ghcr.io/${{ needs.prepare-docker-build.outputs.ghcr-repository }}/builder/cache/app:dev
            type=registry,ref=ghcr.io/${{ needs.prepare-docker-build.outputs.ghcr-repository }}/builder/cache/app:main
          cache-to: |
            type=registry,mode=max,ref=ghcr.io/${{ needs.prepare-docker-build.outputs.ghcr-repository }}/builder/cache/app:${{ github.ref_name }}
      -
        name: Inspect image
        run: |
          docker buildx imagetools inspect ${{ fromJSON(steps.docker-meta.outputs.json).tags[0] }}
      -
        name: Export frontend artifact from docker
        run: |
          docker create --name frontend-extract ${{ fromJSON(steps.docker-meta.outputs.json).tags[0] }}
          docker cp frontend-extract:/usr/src/paperless/src/documents/static/frontend src/documents/static/frontend/
      -
        name: Upload frontend artifact
        uses: actions/upload-artifact@v3
        with:
          name: frontend-compiled
          path: src/documents/static/frontend/

  build-release:
    needs:
      - build-docker-image
    runs-on: ubuntu-20.04
    steps:
      -
        name: Checkout
        uses: actions/checkout@v3
      -
        name: Set up Python
        uses: actions/setup-python@v4
        with:
          python-version: 3.9
      -
        name: Install dependencies
        run: |
          sudo apt-get update -qq
          sudo apt-get install -qq --no-install-recommends gettext liblept5
          pip3 install --upgrade pip setuptools wheel
          pip3 install -r requirements.txt
      -
        name: Download frontend artifact
        uses: actions/download-artifact@v3
        with:
          name: frontend-compiled
          path: src/documents/static/frontend/
      -
        name: Download documentation artifact
        uses: actions/download-artifact@v3
        with:
          name: documentation
          path: docs/_build/html/
      -
        name: Move files
        run: |
          mkdir dist
          mkdir dist/paperless-ngx
          mkdir dist/paperless-ngx/scripts
          cp .dockerignore .env Dockerfile Pipfile Pipfile.lock LICENSE README.md requirements.txt dist/paperless-ngx/
          cp paperless.conf.example dist/paperless-ngx/paperless.conf
          cp gunicorn.conf.py dist/paperless-ngx/gunicorn.conf.py
          cp docker/ dist/paperless-ngx/docker -r
          cp scripts/*.service scripts/*.sh dist/paperless-ngx/scripts/
          cp src/ dist/paperless-ngx/src -r
          cp docs/_build/html/ dist/paperless-ngx/docs -r
      -
        name: Compile messages
        run: |
          cd dist/paperless-ngx/src
          python3 manage.py compilemessages
      -
        name: Collect static files
        run: |
          cd dist/paperless-ngx/src
          python3 manage.py collectstatic --no-input
      -
        name: Make release package
        run: |
          cd dist
          find . -name __pycache__ | xargs rm -r
          tar -cJf paperless-ngx.tar.xz paperless-ngx/
      -
        name: Upload release artifact
        uses: actions/upload-artifact@v3
        with:
          name: release
          path: dist/paperless-ngx.tar.xz

  publish-release:
    runs-on: ubuntu-20.04
    needs:
      - build-release
    if: github.ref_type == 'tag' && (startsWith(github.ref_name, 'v') || contains(github.ref_name, '-beta.rc'))
    steps:
      -
        name: Download release artifact
        uses: actions/download-artifact@v3
        with:
          name: release
          path: ./
      -
        name: Get version
        id: get_version
        run: |
          echo ::set-output name=version::${{ github.ref_name }}
          if [[ ${{ contains(github.ref_name, '-beta.rc') }} == 'true' ]]; then
            echo ::set-output name=prerelease::true
          else
            echo ::set-output name=prerelease::false
          fi
      -
        name: Create Release and Changelog
        id: create-release
        uses: release-drafter/release-drafter@v5
        with:
          name: Paperless-ngx ${{ steps.get_version.outputs.version }}
          tag: ${{ steps.get_version.outputs.version }}
          version: ${{ steps.get_version.outputs.version }}
          prerelease: ${{ steps.get_version.outputs.prerelease }}
          publish: true # ensures release is not marked as draft
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
      -
        name: Upload release archive
        id: upload-release-asset
        uses: actions/upload-release-asset@v1
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
        with:
          upload_url: ${{ steps.create-release.outputs.upload_url }}
          asset_path: ./paperless-ngx.tar.xz
          asset_name: paperless-ngx-${{ steps.get_version.outputs.version }}.tar.xz
          asset_content_type: application/x-xz
      -
        name: Checkout
        uses: actions/checkout@v3
        with:
          ref: main
      -
        name: Append Changelog to docs
        id: append-Changelog
        working-directory: docs
        run: |
          echo -e "# Changelog\n\n${{ steps.create-release.outputs.body }}\n" > changelog-new.md
          CURRENT_CHANGELOG=`tail --lines +2 changelog.md`
          echo -e "$CURRENT_CHANGELOG" >> changelog-new.md
          mv changelog-new.md changelog.md
          git config --global user.name "github-actions"
          git config --global user.email "41898282+github-actions[bot]@users.noreply.github.com"
          git commit -am "Changelog ${{ steps.get_version.outputs.version }} - GHA"
          git push origin HEAD:main
48
.github/workflows/cleanup-tags.yml
vendored
@@ -1,48 +0,0 @@
name: Cleanup Image Tags

on:
  schedule:
    - cron: '0 0 * * SAT'
  delete:
  pull_request:
    types:
      - closed
  push:
    paths:
      - ".github/workflows/cleanup-tags.yml"
      - ".github/scripts/cleanup-tags.py"
      - ".github/scripts/common.py"

env:
  GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}

jobs:
  cleanup:
    name: Cleanup Image Tags
    runs-on: ubuntu-20.04
    permissions:
      packages: write
    steps:
      -
        name: Checkout
        uses: actions/checkout@v3
      -
        name: Login to Github Container Registry
        uses: docker/login-action@v1
        with:
          registry: ghcr.io
          username: ${{ github.actor }}
          password: ${{ secrets.GITHUB_TOKEN }}
      -
        name: Set up Python
        uses: actions/setup-python@v3
        with:
          python-version: "3.9"
      -
        name: Install requests
        run: |
          python -m pip install requests
      -
        name: Cleanup feature tags
        run: |
          python ${GITHUB_WORKSPACE}/.github/scripts/cleanup-tags.py --loglevel info --delete
54
.github/workflows/codeql-analysis.yml
vendored
@@ -1,54 +0,0 @@
# For most projects, this workflow file will not need changing; you simply need
# to commit it to your repository.
#
# You may wish to alter this file to override the set of languages analyzed,
# or to provide custom queries or build logic.
#
# ******** NOTE ********
# We have attempted to detect the languages in your repository. Please check
# the `language` matrix defined below to confirm you have the correct set of
# supported CodeQL languages.
#
name: "CodeQL"

on:
  push:
    branches: [ main, dev ]
  pull_request:
    # The branches below must be a subset of the branches above
    branches: [ dev ]
  schedule:
    - cron: '28 13 * * 5'

jobs:
  analyze:
    name: Analyze
    runs-on: ubuntu-latest
    permissions:
      actions: read
      contents: read
      security-events: write

    strategy:
      fail-fast: false
      matrix:
        language: [ 'javascript', 'python' ]
        # CodeQL supports [ 'cpp', 'csharp', 'go', 'java', 'javascript', 'python', 'ruby' ]
        # Learn more about CodeQL language support at https://git.io/codeql-language-support

    steps:
      - name: Checkout repository
        uses: actions/checkout@v2

      # Initializes the CodeQL tools for scanning.
      - name: Initialize CodeQL
        uses: github/codeql-action/init@v2
        with:
          languages: ${{ matrix.language }}
          # If you wish to specify custom queries, you can do so here or in a config file.
          # By default, queries listed here will override any specified in a config file.
          # Prefix the list here with "+" to use these queries and those in the config file.
          # queries: ./path/to/local/query, your-org/your-repo/queries@main

      - name: Perform CodeQL Analysis
        uses: github/codeql-action/analyze@v2
147
.github/workflows/installer-library.yml
vendored
@@ -1,147 +0,0 @@
# This workflow will run to update the installer library of
# Docker images. These are the images which provide updated wheels
# .deb installation packages or maybe just some compiled library

name: Build Image Library

on:
  push:
    # Must match one of these branches AND one of the paths
    # to be triggered
    branches:
      - "main"
      - "dev"
      - "library-*"
      - "feature-*"
    paths:
      # Trigger the workflow if a Dockerfile changed
      - "docker-builders/**"
      # Trigger if a package was updated
      - ".build-config.json"
      - "Pipfile.lock"
      # Also trigger on workflow changes related to the library
      - ".github/workflows/installer-library.yml"
      - ".github/workflows/reusable-workflow-builder.yml"
      - ".github/scripts/**"

# Set a workflow level concurrency group so primary workflow
# can wait for this to complete if needed
# DO NOT CHANGE without updating main workflow group
concurrency:
  group: build-installer-library
  cancel-in-progress: false

jobs:
  prepare-docker-build:
    name: Prepare Docker Image Version Data
    runs-on: ubuntu-20.04
    steps:
      -
        name: Set ghcr repository name
        id: set-ghcr-repository
        run: |
          ghcr_name=$(echo "${GITHUB_REPOSITORY}" | awk '{ print tolower($0) }')
          echo ::set-output name=repository::${ghcr_name}
      -
        name: Checkout
        uses: actions/checkout@v3
      -
        name: Set up Python
        uses: actions/setup-python@v4
        with:
          python-version: "3.9"
      -
        name: Setup qpdf image
        id: qpdf-setup
        run: |
          build_json=$(python ${GITHUB_WORKSPACE}/.github/scripts/get-build-json.py qpdf)

          echo ${build_json}

          echo ::set-output name=qpdf-json::${build_json}
      -
        name: Setup psycopg2 image
        id: psycopg2-setup
        run: |
          build_json=$(python ${GITHUB_WORKSPACE}/.github/scripts/get-build-json.py psycopg2)

          echo ${build_json}

          echo ::set-output name=psycopg2-json::${build_json}
      -
        name: Setup pikepdf image
        id: pikepdf-setup
        run: |
          build_json=$(python ${GITHUB_WORKSPACE}/.github/scripts/get-build-json.py pikepdf)

          echo ${build_json}

          echo ::set-output name=pikepdf-json::${build_json}
      -
        name: Setup jbig2enc image
        id: jbig2enc-setup
        run: |
          build_json=$(python ${GITHUB_WORKSPACE}/.github/scripts/get-build-json.py jbig2enc)

          echo ${build_json}

          echo ::set-output name=jbig2enc-json::${build_json}

    outputs:

      ghcr-repository: ${{ steps.set-ghcr-repository.outputs.repository }}

      qpdf-json: ${{ steps.qpdf-setup.outputs.qpdf-json }}

      pikepdf-json: ${{ steps.pikepdf-setup.outputs.pikepdf-json }}

      psycopg2-json: ${{ steps.psycopg2-setup.outputs.psycopg2-json }}

      jbig2enc-json: ${{ steps.jbig2enc-setup.outputs.jbig2enc-json}}

  build-qpdf-debs:
    name: qpdf
    needs:
      - prepare-docker-build
    uses: ./.github/workflows/reusable-workflow-builder.yml
    with:
      dockerfile: ./docker-builders/Dockerfile.qpdf
      build-json: ${{ needs.prepare-docker-build.outputs.qpdf-json }}
      build-args: |
        QPDF_VERSION=${{ fromJSON(needs.prepare-docker-build.outputs.qpdf-json).version }}

  build-jbig2enc:
    name: jbig2enc
    needs:
      - prepare-docker-build
    uses: ./.github/workflows/reusable-workflow-builder.yml
    with:
      dockerfile: ./docker-builders/Dockerfile.jbig2enc
      build-json: ${{ needs.prepare-docker-build.outputs.jbig2enc-json }}
      build-args: |
        JBIG2ENC_VERSION=${{ fromJSON(needs.prepare-docker-build.outputs.jbig2enc-json).version }}

  build-psycopg2-wheel:
    name: psycopg2
    needs:
      - prepare-docker-build
    uses: ./.github/workflows/reusable-workflow-builder.yml
    with:
      dockerfile: ./docker-builders/Dockerfile.psycopg2
      build-json: ${{ needs.prepare-docker-build.outputs.psycopg2-json }}
      build-args: |
        PSYCOPG2_VERSION=${{ fromJSON(needs.prepare-docker-build.outputs.psycopg2-json).version }}

  build-pikepdf-wheel:
    name: pikepdf
    needs:
      - prepare-docker-build
      - build-qpdf-debs
    uses: ./.github/workflows/reusable-workflow-builder.yml
    with:
      dockerfile: ./docker-builders/Dockerfile.pikepdf
      build-json: ${{ needs.prepare-docker-build.outputs.pikepdf-json }}
      build-args: |
        REPO=${{ needs.prepare-docker-build.outputs.ghcr-repository }}
        QPDF_VERSION=${{ fromJSON(needs.prepare-docker-build.outputs.qpdf-json).version }}
        PIKEPDF_VERSION=${{ fromJSON(needs.prepare-docker-build.outputs.pikepdf-json).version }}
47
.github/workflows/project-actions.yml
vendored
@@ -1,47 +0,0 @@
name: Project Automations

on:
  issues:
    types:
      - opened
      - reopened
  pull_request_target: #_target allows access to secrets
    types:
      - opened
      - reopened
    branches:
      - main
      - dev

env:
  todo: Todo
  done: Done
  in_progress: In Progress

jobs:
  issue_opened_or_reopened:
    name: issue_opened_or_reopened
    runs-on: ubuntu-latest
    if: github.event_name == 'issues' && (github.event.action == 'opened' || github.event.action == 'reopened')
    steps:
      - name: Set issue status to ${{ env.todo }}
        uses: leonsteinhaeuser/project-beta-automations@v1.2.1
        with:
          gh_token: ${{ secrets.GH_TOKEN }}
          organization: paperless-ngx
          project_id: 2
          resource_node_id: ${{ github.event.issue.node_id }}
          status_value: ${{ env.todo }} # Target status
  pr_opened_or_reopened:
    name: pr_opened_or_reopened
    runs-on: ubuntu-latest
    if: github.event_name == 'pull_request_target' && (github.event.action == 'opened' || github.event.action == 'reopened')
    steps:
      - name: Set PR status to ${{ env.in_progress }}
        uses: leonsteinhaeuser/project-beta-automations@v1.2.1
        with:
          gh_token: ${{ secrets.GH_TOKEN }}
          organization: paperless-ngx
          project_id: 2
          resource_node_id: ${{ github.event.pull_request.node_id }}
          status_value: ${{ env.in_progress }} # Target status
108
.github/workflows/reusable-ci-backend.yml
vendored
@@ -1,108 +0,0 @@
name: Backend CI Jobs

on:
  workflow_call:

jobs:

  code-checks-backend:
    name: "Code Style Checks"
    runs-on: ubuntu-20.04
    steps:
      -
        name: Checkout
        uses: actions/checkout@v3
      -
        name: Install checkers
        run: |
          pipx install reorder-python-imports
          pipx install yesqa
          pipx install add-trailing-comma
          pipx install flake8
      -
        name: Run reorder-python-imports
        run: |
          find src/ -type f -name '*.py' ! -path "*/migrations/*" | xargs reorder-python-imports
      -
        name: Run yesqa
        run: |
          find src/ -type f -name '*.py' ! -path "*/migrations/*" | xargs yesqa
      -
        name: Run add-trailing-comma
        run: |
          find src/ -type f -name '*.py' ! -path "*/migrations/*" | xargs add-trailing-comma
      # black is placed after add-trailing-comma because it may format differently
      # if a trailing comma is added
      -
        name: Run black
        uses: psf/black@stable
        with:
          options: "--check --diff"
          version: "22.3.0"
      -
        name: Run flake8 checks
        run: |
          cd src/
          flake8 --max-line-length=88 --ignore=E203,W503

  tests-backend:
    name: "Tests (${{ matrix.python-version }})"
    runs-on: ubuntu-20.04
    needs:
      - code-checks-backend
    strategy:
      matrix:
        python-version: ['3.8', '3.9', '3.10']
      fail-fast: false
    steps:
      -
        name: Checkout
        uses: actions/checkout@v3
        with:
          fetch-depth: 2
      -
        name: Install pipenv
        run: pipx install pipenv
      -
        name: Set up Python
        uses: actions/setup-python@v4
        with:
          python-version: "${{ matrix.python-version }}"
          cache: "pipenv"
          cache-dependency-path: 'Pipfile.lock'
      -
        name: Install system dependencies
        run: |
          sudo apt-get update -qq
          sudo apt-get install -qq --no-install-recommends unpaper tesseract-ocr imagemagick ghostscript libzbar0 poppler-utils
      -
        name: Install Python dependencies
        run: |
          pipenv sync --dev
      -
        name: Tests
        run: |
          cd src/
          pipenv run pytest
      -
        name: Get changed files
        id: changed-files-specific
        uses: tj-actions/changed-files@v23.1
        with:
          files: |
            src/**
      -
        name: List all changed files
        run: |
          for file in ${{ steps.changed-files-specific.outputs.all_changed_files }}; do
            echo "${file} was changed"
          done
      -
        name: Publish coverage results
        if: matrix.python-version == '3.9' && steps.changed-files-specific.outputs.any_changed == 'true'
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
        # https://github.com/coveralls-clients/coveralls-python/issues/251
        run: |
          cd src/
          pipenv run coveralls --service=github
42
.github/workflows/reusable-ci-frontend.yml
vendored
@@ -1,42 +0,0 @@
name: Frontend CI Jobs

on:
  workflow_call:

jobs:

  code-checks-frontend:
    name: "Code Style Checks"
    runs-on: ubuntu-20.04
    steps:
      -
        name: Checkout
        uses: actions/checkout@v3
      - uses: actions/setup-node@v3
        with:
          node-version: '16'
      -
        name: Install prettier
        run: |
          npm install prettier
      -
        name: Run prettier
        run:
          npx prettier --check --ignore-path Pipfile.lock **/*.js **/*.ts *.md **/*.md
  tests-frontend:
    name: "Tests"
    runs-on: ubuntu-20.04
    needs:
      - code-checks-frontend
    strategy:
      matrix:
        node-version: [16.x]
    steps:
      - uses: actions/checkout@v3
      - name: Use Node.js ${{ matrix.node-version }}
        uses: actions/setup-node@v3
        with:
          node-version: ${{ matrix.node-version }}
      - run: cd src-ui && npm ci
      - run: cd src-ui && npm run test
      - run: cd src-ui && npm run e2e:ci
53
.github/workflows/reusable-workflow-builder.yml
vendored
@@ -1,53 +0,0 @@
name: Reusable Image Builder

on:
  workflow_call:
    inputs:
      dockerfile:
        required: true
        type: string
      build-json:
        required: true
        type: string
      build-args:
        required: false
        default: ""
        type: string

concurrency:
  group: ${{ github.workflow }}-${{ fromJSON(inputs.build-json).name }}-${{ fromJSON(inputs.build-json).version }}
  cancel-in-progress: false

jobs:
  build-image:
    name: Build ${{ fromJSON(inputs.build-json).name }} @ ${{ fromJSON(inputs.build-json).version }}
    runs-on: ubuntu-latest
    steps:
      -
        name: Checkout
        uses: actions/checkout@v3
      -
        name: Login to Github Container Registry
        uses: docker/login-action@v2
        with:
          registry: ghcr.io
          username: ${{ github.actor }}
          password: ${{ secrets.GITHUB_TOKEN }}
      -
        name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v2
      -
        name: Set up QEMU
        uses: docker/setup-qemu-action@v2
      -
        name: Build ${{ fromJSON(inputs.build-json).name }}
        uses: docker/build-push-action@v3
        with:
          context: .
          file: ${{ inputs.dockerfile }}
          tags: ${{ fromJSON(inputs.build-json).image_tag }}
          platforms: linux/amd64,linux/arm64,linux/arm/v7
          build-args: ${{ inputs.build-args }}
          push: true
          cache-from: type=registry,ref=${{ fromJSON(inputs.build-json).cache_tag }}
          cache-to: type=registry,mode=max,ref=${{ fromJSON(inputs.build-json).cache_tag }}
36
.gitignore
vendored
@@ -57,39 +57,31 @@ docs/_build/
# PyBuilder
target/

# Stored PDFs
media/documents/*.gpg
media/documents/thumbnails/*
media/documents/originals/*
media/overrides.css
media/overrides.js

# Sqlite database
db.sqlite3

# PyCharm
.idea

# VS Code
.vscode
/src-ui/.vscode
/docs/.vscode

# Other stuff that doesn't belong
.virtualenv
virtualenv
/venv
.venv/
/docker-compose.env
/docker-compose.yml
docker-compose.yml
docker-compose.env

# Used for development
scripts/import-for-development
scripts/nuke

# Static files collected by the collectstatic command
/static/
static/

# Stored PDFs
/media/
/data/
/paperless.conf
/consume/
/export/

# this is where the compiled frontend is moved to.
/src/documents/static/frontend/

# mac os
.DS_Store
# Classification Models
models/
@@ -1,94 +0,0 @@
# This file configures pre-commit hooks.
# See https://pre-commit.com/ for general information
# See https://pre-commit.com/hooks.html for a listing of possible hooks

repos:
  # General hooks
  - repo: https://github.com/pre-commit/pre-commit-hooks
    rev: v4.3.0
    hooks:
      - id: check-docstring-first
      - id: check-json
        exclude: "tsconfig.*json"
      - id: check-yaml
      - id: check-toml
      - id: check-executables-have-shebangs
      - id: end-of-file-fixer
        exclude_types:
          - svg
          - pofile
        exclude: "(^LICENSE$)"
      - id: mixed-line-ending
        args:
          - "--fix=lf"
      - id: trailing-whitespace
        exclude_types:
          - svg
      - id: check-case-conflict
      - id: detect-private-key
  - repo: https://github.com/pre-commit/mirrors-prettier
    rev: "v2.7.1"
    hooks:
      - id: prettier
        types_or:
          - javascript
          - ts
          - markdown
        exclude: "(^Pipfile\\.lock$)"
  # Python hooks
  - repo: https://github.com/asottile/reorder_python_imports
    rev: v3.8.1
    hooks:
      - id: reorder-python-imports
        exclude: "(migrations)"
  - repo: https://github.com/asottile/yesqa
    rev: "v1.3.0"
    hooks:
      - id: yesqa
        exclude: "(migrations)"
  - repo: https://github.com/asottile/add-trailing-comma
    rev: "v2.2.3"
    hooks:
      - id: add-trailing-comma
        exclude: "(migrations)"
  - repo: https://gitlab.com/pycqa/flake8
    rev: 3.9.2
    hooks:
      - id: flake8
        files: ^src/
        args:
          - "--config=./src/setup.cfg"
  - repo: https://github.com/psf/black
    rev: 22.6.0
    hooks:
      - id: black
  - repo: https://github.com/asottile/pyupgrade
    rev: v2.37.1
    hooks:
      - id: pyupgrade
        exclude: "(migrations)"
        args:
          - "--py38-plus"
  # Dockerfile hooks
  - repo: https://github.com/AleksaC/hadolint-py
    rev: v2.10.0
    hooks:
      - id: hadolint
        args:
          - --ignore
          - DL3008 # https://github.com/hadolint/hadolint/wiki/DL3008 (should probably do this at some point)
          - --ignore
          - DL3013 # https://github.com/hadolint/hadolint/wiki/DL3013 (should probably do this too at some point)
          - --ignore
          - DL3003 # https://github.com/hadolint/hadolint/wiki/DL3003 (seems excessive to use WORKDIR so much)
  # Shell script hooks
  - repo: https://github.com/lovesegfault/beautysh
    rev: v6.2.1
    hooks:
      - id: beautysh
        args:
          - "--tab"
  - repo: https://github.com/shellcheck-py/shellcheck-py
    rev: "v0.8.0.4"
    hooks:
      - id: shellcheck
@@ -1,4 +0,0 @@
# https://prettier.io/docs/en/options.html#semicolons
semi: false
# https://prettier.io/docs/en/options.html#quotes
singleQuote: true
@@ -1,16 +0,0 @@
# .readthedocs.yml
# Read the Docs configuration file
# See https://docs.readthedocs.io/en/stable/config-file/v2.html for details

# Required
version: 2

# Build documentation in the docs/ directory with Sphinx
sphinx:
  configuration: docs/conf.py

# Optionally set the version of Python and requirements required to build your docs
python:
  version: "3.8"
  install:
    - requirements: docs/requirements.txt
25
.travis.yml
Normal file
@@ -0,0 +1,25 @@
language: python

before_install:
  - sudo apt-get update -qq
  - sudo apt-get install -qq libpoppler-cpp-dev unpaper tesseract-ocr tesseract-ocr-eng tesseract-ocr-cat tesseract-ocr-deu

sudo: false

matrix:
  include:
    - python: 3.4
    - python: 3.5
    - python: 3.6

install:
  - pip install --requirement requirements.txt
  - pip install sphinx
script:
  - cd src/
  - pytest --cov
  - pycodestyle
  - sphinx-build -b html ../docs ../docs/_build -W

after_success:
  - coveralls
10
CODEOWNERS
@@ -1,10 +0,0 @@
/.github/workflows/ @paperless-ngx/ci-cd
/docker/ @paperless-ngx/ci-cd
/scripts/ @paperless-ngx/ci-cd

/src-ui/ @paperless-ngx/frontend

/src/ @paperless-ngx/backend
Pipfile* @paperless-ngx/backend
*.py @paperless-ngx/backend
requirements.txt @paperless-ngx/backend
@@ -2,127 +2,45 @@

## Our Pledge

We as members, contributors, and leaders pledge to make participation in our
community a harassment-free experience for everyone, regardless of age, body
size, visible or invisible disability, ethnicity, sex characteristics, gender
identity and expression, level of experience, education, socio-economic status,
nationality, personal appearance, race, religion, or sexual identity
and orientation.

We pledge to act and interact in ways that contribute to an open, welcoming,
diverse, inclusive, and healthy community.
In the interest of fostering an open and welcoming environment, we as contributors and maintainers pledge to making participation in our project and our community a harassment-free experience for everyone, regardless of age, body size, disability, ethnicity, gender identity and expression, level of experience, nationality, personal appearance, race, religion, or sexual identity and orientation.

## Our Standards

Examples of behavior that contributes to a positive environment for our
community include:
Examples of behavior that contributes to creating a positive environment include:

- Demonstrating empathy and kindness toward other people
- Being respectful of differing opinions, viewpoints, and experiences
- Giving and gracefully accepting constructive feedback
- Accepting responsibility and apologizing to those affected by our mistakes,
  and learning from the experience
- Focusing on what is best not just for us as individuals, but for the
  overall community
* Using welcoming and inclusive language
* Being respectful of differing viewpoints and experiences
* Gracefully accepting constructive criticism
* Focusing on what is best for the community
* Showing empathy towards other community members

Examples of unacceptable behavior include:
Examples of unacceptable behavior by participants include:

- The use of sexualized language or imagery, and sexual attention or
  advances of any kind
- Trolling, insulting or derogatory comments, and personal or political attacks
- Public or private harassment
- Publishing others' private information, such as a physical or email
  address, without their explicit permission
- Other conduct which could reasonably be considered inappropriate in a
  professional setting
* Unwelcome sexual attention or advances
* Trolling, insulting/derogatory comments, and personal or political attacks
* Public or private harassment
* Publishing others' private information, such as a physical or electronic address, without explicit permission
* Other conduct which could reasonably be considered inappropriate in a professional setting

## Enforcement Responsibilities
## Our Responsibilities

Community leaders are responsible for clarifying and enforcing our standards of
acceptable behavior and will take appropriate and fair corrective action in
response to any behavior that they deem inappropriate, threatening, offensive,
or harmful.
Project maintainers are responsible for clarifying the standards of acceptable behavior and are expected to take appropriate and fair corrective action in response to any instances of unacceptable behavior.

Community leaders have the right and responsibility to remove, edit, or reject
comments, commits, code, wiki edits, issues, and other contributions that are
not aligned to this Code of Conduct, and will communicate reasons for moderation
decisions when appropriate.
Project maintainers have the right and responsibility to remove, edit, or reject comments, commits, code, wiki edits, issues, and other contributions that are not aligned to this Code of Conduct, or to ban temporarily or permanently any contributor for other behaviors that they deem inappropriate, threatening, offensive, or harmful.

## Scope

This Code of Conduct applies within all community spaces, and also applies when
an individual is officially representing the community in public spaces.
Examples of representing our community include using an official e-mail address,
posting via an official social media account, or acting as an appointed
representative at an online or offline event.
This Code of Conduct applies both within project spaces and in public spaces when an individual is representing the project or its community. Examples of representing a project or community include using an official project e-mail address, posting via an official social media account, or acting as an appointed representative at an online or offline event. Representation of a project may be further defined and clarified by project maintainers.

## Enforcement

Instances of abusive, harassing, or otherwise unacceptable behavior may be
reported to the community leaders responsible for enforcement at
hello@paperless-ngx.com.
All complaints will be reviewed and investigated promptly and fairly.
Instances of abusive, harassing, or otherwise unacceptable behavior may be reported by contacting the project team at code@danielquinn.org. The project team will review and investigate all complaints, and will respond in a way that it deems appropriate to the circumstances. The project team is obligated to maintain confidentiality with regard to the reporter of an incident. Further details of specific enforcement policies may be posted separately.

All community leaders are obligated to respect the privacy and security of the
reporter of any incident.

## Enforcement Guidelines

Community leaders will follow these Community Impact Guidelines in determining
the consequences for any action they deem in violation of this Code of Conduct:

### 1. Correction

**Community Impact**: Use of inappropriate language or other behavior deemed
unprofessional or unwelcome in the community.

**Consequence**: A private, written warning from community leaders, providing
clarity around the nature of the violation and an explanation of why the
behavior was inappropriate. A public apology may be requested.

### 2. Warning

**Community Impact**: A violation through a single incident or series
of actions.

**Consequence**: A warning with consequences for continued behavior. No
interaction with the people involved, including unsolicited interaction with
those enforcing the Code of Conduct, for a specified period of time. This
includes avoiding interactions in community spaces as well as external channels
like social media. Violating these terms may lead to a temporary or
permanent ban.

### 3. Temporary Ban

**Community Impact**: A serious violation of community standards, including
sustained inappropriate behavior.

**Consequence**: A temporary ban from any sort of interaction or public
communication with the community for a specified period of time. No public or
private interaction with the people involved, including unsolicited interaction
with those enforcing the Code of Conduct, is allowed during this period.
Violating these terms may lead to a permanent ban.

### 4. Permanent Ban

**Community Impact**: Demonstrating a pattern of violation of community
standards, including sustained inappropriate behavior, harassment of an
individual, or aggression toward or disparagement of classes of individuals.

**Consequence**: A permanent ban from any sort of public interaction within
the community.
Project maintainers who do not follow or enforce the Code of Conduct in good faith may face temporary or permanent repercussions as determined by other members of the project's leadership.

## Attribution

This Code of Conduct is adapted from the [Contributor Covenant][homepage],
version 2.0, available at
https://www.contributor-covenant.org/version/2/0/code_of_conduct.html.
This Code of Conduct is adapted from the [Contributor Covenant][homepage], version 1.4 to remove puritanical language. The original is available at [http://contributor-covenant.org/version/1/4][version]

Community Impact Guidelines were inspired by [Mozilla's code of conduct
enforcement ladder](https://github.com/mozilla/diversity).

[homepage]: https://www.contributor-covenant.org

For answers to common questions about this code of conduct, see the FAQ at
https://www.contributor-covenant.org/faq. Translations are available at
https://www.contributor-covenant.org/translations.
[homepage]: http://contributor-covenant.org
[version]: http://contributor-covenant.org/version/1/4/
132
CONTRIBUTING.md
@@ -1,132 +0,0 @@
# Contributing

If you feel like contributing to the project, please do! Bug fixes and improvements are always welcome.

If you want to implement something big:

- Please start a discussion about that in the issues! Maybe something similar is already in development and we can make it happen together.
- When making additions to the project, consider if the majority of users will benefit from your change. If not, you're probably better off forking the project.
- Also consider if your change will get in the way of other users. A good change is a change that enhances the experience of some users who want that change and does not affect users who do not care about the change.
- Please see the [paperless-ngx merge process](#merging-prs) below.

## Python

Paperless supports python 3.8 and 3.9. We format Python code with [Black](https://github.com/psf/black).

## Branches

`main` always reflects the latest release. Apart from changes to the documentation or readme, absolutely no functional changes land on this branch in between releases.

`dev` contains all changes that will be part of the next release. Use this branch to start making your changes.

`feature-X` branches are for experimental stuff that will eventually be merged into dev.

## Testing:

Please format and test your code! I know it's a hassle, but it makes sure that your code works now and will allow us to detect regressions easily.

To test your code, execute `pytest` in the src/ directory. This also generates an HTML coverage report, which you can use to see if you missed anything important during testing.

Before you can run `pytest`, ensure to [properly set up your local environment](https://paperless-ngx.readthedocs.io/en/latest/extending.html#initial-setup-and-first-start).
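As a concrete illustration of the testing workflow described above, a minimal pytest-style test might look like the following. This is only a sketch: the test class, the document fields, and the values shown here are illustrative assumptions, not code taken from the paperless-ngx test suite.

```python
# Illustrative only: a minimal Django/pytest-style test sketch.
# The Document fields used here are assumptions, not actual test code
# from the repository.
from django.test import TestCase

from documents.models import Document


class TestDocumentTitle(TestCase):
    def test_document_keeps_its_title(self):
        # Create a bare document and check that the stored title survives
        # a round trip through the database.
        doc = Document.objects.create(title="Water bill", checksum="abc123")
        self.assertEqual(Document.objects.get(pk=doc.pk).title, "Water bill")
```

Running `pytest` from src/ picks up tests like this and produces the coverage report mentioned above.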
## More info:

... is available in the documentation. https://paperless-ngx.readthedocs.io/en/latest/extending.html

# Merging PRs

Once you have submitted a **P**ull **R**equest it will be reviewed, approved, and merged by one or more community members of any team. Automated code tests and formatting checks must be passed.

## Non-Trivial Requests

PRs deemed `non-trivial` will go through a stricter review process before being merged into `dev`. This is to ensure code quality and complete functionality (free of side effects).

Examples of `non-trivial` PRs might include:

- Additional features
- Large changes to many distinct files
- Breaking or deprecation of existing features

Our community review process for `non-trivial` PRs is the following:

1. Must pass usual automated code tests and formatting checks.
2. The PR will be assigned and pinged to the appropriately experienced team (i.e. @paperless-ngx/backend for backend changes).
3. Development team will check and test code manually (possibly over several days).
   - You may be asked to make changes or rebase.
   - The team may ask for additional testing done by @paperless-ngx/test
4. **At least two** members of the team will approve and finally merge the request into `dev` 🎉.

This process might be slow as community members have different schedules and time to dedicate to the Paperless project. However it ensures community code reviews are as brilliantly thorough as they once were with @jonaswinkler.

# Translating Paperless-ngx

Some notes about translation:

- There are two resources:
  - `src-ui/messages.xlf` contains the translation strings for the front end. This is the most important.
  - `django.po` contains strings for the administration section of paperless, which is nice to have translated.
- Most of the front-end strings are used on buttons, menu items, etc., so ideally the translated string should not be much longer than the English original.
- Translation units may contain placeholders. These usually mean that there's a name of a tag or document or something in the string. You can click on the placeholders to copy them.
- Translation units may contain plural expressions such as `{PLURAL_VAR, plural, =1 {one result} =0 {no results} other {<placeholder> results}}`. Copy these verbatim and translate only the content in the inner `{}` brackets. Example: `{PLURAL_VAR, plural, =1 {Ein Ergebnis} =0 {Keine Ergebnisse} other {<placeholder> Ergebnisse}}`
- Changes to translations on Crowdin will get pushed into the repository automatically.

## Adding new languages to the codebase

If a language has already been added, and you would like to contribute new translations or change existing translations, please read the "Translation" section in the README.md file for further details on that.

If you would like the project to be translated to another language, first head over to https://crwd.in/paperless-ngx to check if that language has already been enabled for translation.
If not, please request the language to be added by creating an issue on GitHub. The issue should contain:

- English name of the language (the localized name can be added on Crowdin).
- ISO language code. A list of those can be found here: https://support.crowdin.com/enterprise/language-codes/
- Date format commonly used for the language, e.g. dd/mm/yyyy, mm/dd/yyyy, etc.

After the language has been added and some translations have been made on Crowdin, the language needs to be enabled in the code.
Note that there is no need to manually add a .po or .xlf file as those will be automatically generated and imported from Crowdin.
The following files need to be changed:

- src-ui/angular.json (under the _projects/paperless-ui/i18n/locales_ JSON key)
- src/paperless/settings.py (in the _LANGUAGES_ array; see the sketch after this list)
- src-ui/src/app/services/settings.service.ts (inside the _getLanguageOptions_ method)
- src-ui/src/app/app.module.ts (import locale from _angular/common/locales_ and call _registerLocaleData_)

Please add the language in the correct order, alphabetically by locale.
Note that _en-us_ needs to stay on top of the list, as it is the default project language.
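For the `src/paperless/settings.py` step above, the change amounts to one new entry in Django's `LANGUAGES` list. The snippet below is a hedged sketch using Django's standard convention; the surrounding entries are illustrative and may not match the real file exactly.

```python
# Sketch of the settings.py change for a new language (illustrative only;
# the existing entries shown here may not match the actual file).
from django.utils.translation import gettext_lazy as _

LANGUAGES = [
    ("en-us", _("English (US)")),  # en-us must stay first: default project language
    ("cs-cz", _("Czech")),
    ("de-de", _("German")),
    # ... insert the new locale here, keeping the list sorted alphabetically
]
```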
If you are familiar with Git, feel free to send a Pull Request with those changes.
If not, let us know in the issue you created for the language, so that another developer can make these changes.

# Organization Structure & Membership

Paperless-ngx is a community project. We do our best to delegate permission and responsibility among a team of people to ensure the longevity of the project.

## Structure

As of writing, there are 21 members in paperless-ngx. 4 of these people have complete administrative privileges to the repo:

- [@shamoon](https://github.com/shamoon)
- [@bauerj](https://github.com/bauerj)
- [@qcasey](https://github.com/qcasey)
- [@FrankStrieter](https://github.com/FrankStrieter)

There are 5 teams collaborating on specific tasks within paperless-ngx:

- @paperless-ngx/backend (Python / django)
- @paperless-ngx/frontend (JavaScript / Typescript)
- @paperless-ngx/ci-cd (GitHub Actions / Deployment)
- @paperless-ngx/issues (Issue triage)
- @paperless-ngx/test (General testing for larger PRs)

## Permissions

All team members are notified when mentioned or assigned to a relevant issue or pull request. Additionally, each team has slightly different access to paperless-ngx:

- The **test** team has no special permissions.
- The **issues** team has `triage` access. This means they can organize issues and pull requests.
- The **backend**, **frontend**, and **ci-cd** teams have `write` access. This means they can approve PRs and push code, containers, releases, and more.

## Joining

We are not overly strict with inviting people to the organization. If you have read the [team permissions](#permissions) and think having additional access would enhance your contributions, please reach out to an [admin](#structure) of the team.

The admins occasionally invite contributors directly if we believe having them on a team will accelerate their work.
257
Dockerfile
@@ -1,228 +1,47 @@
# syntax=docker/dockerfile:1.4
FROM alpine:3.8

# Pull the installer images from the library
# These are all built previously
# They provide either a .deb or .whl
LABEL maintainer="The Paperless Project https://github.com/danielquinn/paperless" \
contributors="Guy Addadi <addadi@gmail.com>, Pit Kleyersburg <pitkley@googlemail.com>, \
Sven Fischer <git-dev@linux4tw.de>"

ARG JBIG2ENC_VERSION
ARG QPDF_VERSION
ARG PIKEPDF_VERSION
ARG PSYCOPG2_VERSION
# Copy requirements file and init script
COPY requirements.txt /usr/src/paperless/
COPY scripts/docker-entrypoint.sh /sbin/docker-entrypoint.sh

FROM ghcr.io/paperless-ngx/paperless-ngx/builder/jbig2enc:${JBIG2ENC_VERSION} as jbig2enc-builder
FROM ghcr.io/paperless-ngx/paperless-ngx/builder/qpdf:${QPDF_VERSION} as qpdf-builder
FROM ghcr.io/paperless-ngx/paperless-ngx/builder/pikepdf:${PIKEPDF_VERSION} as pikepdf-builder
FROM ghcr.io/paperless-ngx/paperless-ngx/builder/psycopg2:${PSYCOPG2_VERSION} as psycopg2-builder
# Set export and consumption directories
ENV PAPERLESS_EXPORT_DIR=/export \
PAPERLESS_CONSUMPTION_DIR=/consume

FROM --platform=$BUILDPLATFORM node:16-bullseye-slim AS compile-frontend

# This stage compiles the frontend
# This stage runs once for the native platform, as the outputs are not
# dependent on target arch
# Inputs: None

COPY ./src-ui /src/src-ui

WORKDIR /src/src-ui
RUN set -eux \
&& npm update npm -g \
&& npm ci --omit=optional
RUN set -eux \
&& ./node_modules/.bin/ng build --configuration production

FROM python:3.9-slim-bullseye as main-app

LABEL org.opencontainers.image.authors="paperless-ngx team <hello@paperless-ngx.com>"
LABEL org.opencontainers.image.documentation="https://paperless-ngx.readthedocs.io/en/latest/"
LABEL org.opencontainers.image.source="https://github.com/paperless-ngx/paperless-ngx"
LABEL org.opencontainers.image.url="https://github.com/paperless-ngx/paperless-ngx"
LABEL org.opencontainers.image.licenses="GPL-3.0-only"

ARG DEBIAN_FRONTEND=noninteractive

#
# Begin installation and configuration
# Order the steps below from least often changed to most
#

# copy jbig2enc
# Basically will never change again
COPY --from=jbig2enc-builder /usr/src/jbig2enc/src/.libs/libjbig2enc* /usr/local/lib/
COPY --from=jbig2enc-builder /usr/src/jbig2enc/src/jbig2 /usr/local/bin/
COPY --from=jbig2enc-builder /usr/src/jbig2enc/src/*.h /usr/local/include/

# Packages need for running
ARG RUNTIME_PACKAGES="\
curl \
file \
# fonts for text file thumbnail generation
fonts-liberation \
gettext \
ghostscript \
gnupg \
gosu \
icc-profiles-free \
imagemagick \
media-types \
liblept5 \
libpq5 \
libxml2 \
liblcms2-2 \
libtiff5 \
libxslt1.1 \
libfreetype6 \
libwebp6 \
libopenjp2-7 \
libimagequant0 \
libraqm0 \
libgnutls30 \
libjpeg62-turbo \
python3 \
python3-pip \
python3-setuptools \
postgresql-client \
# For Numpy
libatlas3-base \
# OCRmyPDF dependencies
tesseract-ocr \
tesseract-ocr-eng \
tesseract-ocr-deu \
tesseract-ocr-fra \
tesseract-ocr-ita \
tesseract-ocr-spa \
tzdata \
unpaper \
# Mime type detection
zlib1g \
# Barcode splitter
libzbar0 \
poppler-utils"

# Install basic runtime packages.
# These change very infrequently
RUN set -eux \
echo "Installing system packages" \
&& apt-get update \
&& apt-get install --yes --quiet --no-install-recommends ${RUNTIME_PACKAGES} \
&& rm -rf /var/lib/apt/lists/* \
&& echo "Installing supervisor" \
&& python3 -m pip install --default-timeout=1000 --upgrade --no-cache-dir supervisor==4.2.4

# Copy gunicorn config
# Changes very infrequently
WORKDIR /usr/src/paperless/

COPY gunicorn.conf.py .

# setup docker-specific things
# Use mounts to avoid copying installer files into the image
# These change sometimes, but rarely
ARG DOCKER_SRC=/usr/src/paperless/src/docker/
WORKDIR ${DOCKER_SRC}

COPY [ \
"docker/imagemagick-policy.xml", \
"docker/supervisord.conf", \
"docker/docker-entrypoint.sh", \
"docker/docker-prepare.sh", \
"docker/paperless_cmd.sh", \
"docker/wait-for-redis.py", \
"docker/management_script.sh", \
"docker/install_management_commands.sh", \
"${DOCKER_SRC}" \
]

RUN set -eux \
&& echo "Configuring ImageMagick" \
&& mv imagemagick-policy.xml /etc/ImageMagick-6/policy.xml \
&& echo "Configuring supervisord" \
&& mkdir /var/log/supervisord /var/run/supervisord \
&& mv supervisord.conf /etc/supervisord.conf \
&& echo "Setting up Docker scripts" \
&& mv docker-entrypoint.sh /sbin/docker-entrypoint.sh \
&& chmod 755 /sbin/docker-entrypoint.sh \
&& mv docker-prepare.sh /sbin/docker-prepare.sh \
&& chmod 755 /sbin/docker-prepare.sh \
&& mv wait-for-redis.py /sbin/wait-for-redis.py \
&& chmod 755 /sbin/wait-for-redis.py \
&& mv paperless_cmd.sh /usr/local/bin/paperless_cmd.sh \
&& chmod 755 /usr/local/bin/paperless_cmd.sh \
&& echo "Installing managment commands" \
&& chmod +x install_management_commands.sh \
&& ./install_management_commands.sh

# Install the built packages from the installer library images
# Use mounts to avoid copying installer files into the image
# These change sometimes
RUN --mount=type=bind,from=qpdf-builder,target=/qpdf \
--mount=type=bind,from=psycopg2-builder,target=/psycopg2 \
--mount=type=bind,from=pikepdf-builder,target=/pikepdf \
set -eux \
&& echo "Installing qpdf" \
&& apt-get install --yes --no-install-recommends /qpdf/usr/src/qpdf/libqpdf28_*.deb \
&& apt-get install --yes --no-install-recommends /qpdf/usr/src/qpdf/qpdf_*.deb \
&& echo "Installing pikepdf and dependencies" \
&& python3 -m pip install --no-cache-dir /pikepdf/usr/src/wheels/packaging*.whl \
&& python3 -m pip install --no-cache-dir /pikepdf/usr/src/wheels/lxml*.whl \
&& python3 -m pip install --no-cache-dir /pikepdf/usr/src/wheels/Pillow*.whl \
&& python3 -m pip install --no-cache-dir /pikepdf/usr/src/wheels/pyparsing*.whl \
&& python3 -m pip install --no-cache-dir /pikepdf/usr/src/wheels/pikepdf*.whl \
&& python -m pip list \
&& echo "Installing psycopg2" \
&& python3 -m pip install --no-cache-dir /psycopg2/usr/src/wheels/psycopg2*.whl \
&& python -m pip list

# Python dependencies
# Change pretty frequently
COPY requirements.txt ../

# Packages needed only for building a few quick Python
# dependencies
ARG BUILD_PACKAGES="\
build-essential \
git \
python3-dev"

RUN set -eux \
&& echo "Installing build system packages" \
&& apt-get update \
&& apt-get install --yes --quiet --no-install-recommends ${BUILD_PACKAGES} \
&& python3 -m pip install --no-cache-dir --upgrade wheel \
&& echo "Installing Python requirements" \
&& python3 -m pip install --default-timeout=1000 --no-cache-dir -r ../requirements.txt \
&& echo "Cleaning up image" \
&& apt-get -y purge ${BUILD_PACKAGES} \
&& apt-get -y autoremove --purge \
&& apt-get clean --yes \
&& rm -rf /var/lib/apt/lists/* \
&& rm -rf /tmp/* \
&& rm -rf /var/tmp/* \
&& rm -rf /var/cache/apt/archives/* \
&& truncate -s 0 /var/log/*log

WORKDIR /usr/src/paperless/src/

# copy backend
COPY ./src ./

# copy frontend
COPY --from=compile-frontend /src/src/documents/static/frontend/ ./documents/static/frontend/

# add users, setup scripts
RUN set -eux \
&& addgroup --gid 1000 paperless \
&& useradd --uid 1000 --gid paperless --home-dir /usr/src/paperless paperless \
&& chown -R paperless:paperless ../ \
&& gosu paperless python3 manage.py collectstatic --clear --no-input \
&& gosu paperless python3 manage.py compilemessages

VOLUME ["/usr/src/paperless/data", \
"/usr/src/paperless/media", \
"/usr/src/paperless/consume", \
"/usr/src/paperless/export"]
RUN apk update --no-cache && apk add python3 gnupg libmagic libpq bash shadow curl \
sudo poppler tesseract-ocr imagemagick ghostscript unpaper optipng && \
apk add --virtual .build-dependencies \
python3-dev poppler-dev postgresql-dev gcc g++ musl-dev zlib-dev jpeg-dev && \
# Install python dependencies
python3 -m ensurepip && \
rm -r /usr/lib/python*/ensurepip && \
cd /usr/src/paperless && \
pip3 install --no-cache-dir -r requirements.txt && \
# Remove build dependencies
apk del .build-dependencies && \
# Create the consumption directory
mkdir -p $PAPERLESS_CONSUMPTION_DIR && \
# Create user
addgroup -g 1000 paperless && \
adduser -D -u 1000 -G paperless -h /usr/src/paperless paperless && \
chown -Rh paperless:paperless /usr/src/paperless && \
mkdir -p $PAPERLESS_EXPORT_DIR && \
# Setup entrypoint
chmod 755 /sbin/docker-entrypoint.sh

WORKDIR /usr/src/paperless/src
# Mount volumes and set Entrypoint
VOLUME ["/usr/src/paperless/data", "/usr/src/paperless/media", "/consume", "/export"]
ENTRYPOINT ["/sbin/docker-entrypoint.sh"]
CMD ["--help"]

EXPOSE 8000
# Copy application
COPY src/ /usr/src/paperless/src/
COPY data/ /usr/src/paperless/data/
COPY media/ /usr/src/paperless/media/

CMD ["/usr/local/bin/paperless_cmd.sh"]
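The Dockerfile above copies a `docker/wait-for-redis.py` helper into the image. As a rough illustration of what such a readiness check usually looks like, here is a hedged sketch; it is not the project's actual script, and the `PAPERLESS_REDIS` variable name and retry policy are assumptions.

```python
# Illustrative sketch of a "wait for Redis" startup check. Not the actual
# docker/wait-for-redis.py; environment variable and retry policy are assumed.
import os
import sys
import time

from redis import Redis

MAX_RETRIES = 5


def main() -> int:
    url = os.getenv("PAPERLESS_REDIS", "redis://localhost:6379")
    client = Redis.from_url(url)
    for attempt in range(1, MAX_RETRIES + 1):
        try:
            client.ping()  # raises if the broker is not reachable yet
            return 0
        except Exception:
            time.sleep(attempt)  # simple linear back-off between attempts
    return 1


if __name__ == "__main__":
    sys.exit(main())
```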
81
Pipfile
@@ -3,68 +3,37 @@ url = "https://pypi.python.org/simple"
verify_ssl = true
name = "pypi"

[[source]]
url = "https://www.piwheels.org/simple"
verify_ssl = true
name = "piwheels"

[packages]
dateparser = "~=1.1"
django = "~=4.0"
django-cors-headers = "*"
django-extensions = "*"
django-filter = "~=22.1"
django-q = {editable = true, ref = "paperless-main", git = "https://github.com/paperless-ngx/django-q.git"}
djangorestframework = "~=3.13"
filelock = "*"
fuzzywuzzy = {extras = ["speedup"], version = "*"}
gunicorn = "*"
imap-tools = "*"
langdetect = "*"
pathvalidate = "*"
pillow = "~=9.2"
pikepdf = "~=5.1"
python-gnupg = "*"
python-dotenv = "*"
python-dateutil = "*"
python-magic = "*"
psycopg2 = "*"
redis = "*"
scikit-learn="~=1.1"
whitenoise = "~=6.2.0"
watchdog = "~=2.1.9"
whoosh="~=2.7.4"
inotifyrecursive = "~=0.3"
ocrmypdf = "~=13.4"
tqdm = "*"
tika = "*"
# TODO: This will sadly also install daphne+dependencies,
# which an ASGI server we don't need. Adds about 15MB image size.
channels = "~=3.0"
channels-redis = "*"
uvicorn = {extras = ["standard"], version = "*"}
concurrent-log-handler = "*"
"pdfminer.six" = "*"
"backports.zoneinfo" = {version = "*", markers = "python_version < '3.9'"}
"importlib-resources" = {version = "*", markers = "python_version < '3.9'"}
zipp = {version = "*", markers = "python_version < '3.9'"}
pyzbar = "*"
pdf2image = "*"

[dev-packages]
django = "<2.1,>=2.0"
pillow = "*"
coveralls = "*"
dateparser = "*"
django-cors-headers = "*"
django-crispy-forms = "*"
django-extensions = "*"
django-filter = "*"
djangorestframework = "*"
factory-boy = "*"
filemagic = "*"
fuzzywuzzy = {extras = ["speedup"], version = "==0.15.0"}
gunicorn = "*"
inotify-simple = "*"
langdetect = "*"
pdftotext = "*"
pyocr = "*"
python-dateutil = "*"
python-dotenv = "*"
python-gnupg = "*"
pytz = "*"
sphinx = "*"
tox = "*"
pycodestyle = "*"
pytest = "*"
pytest-cov = "*"
pytest-django = "*"
pytest-env = "*"
pytest-sugar = "*"
pytest-env = "*"
pytest-xdist = "*"
sphinx = "~=5.0.2"
sphinx_rtd_theme = "*"
tox = "*"
black = "*"
pre-commit = "*"
sphinx-autobuild = "*"
myst-parser = "*"

[dev-packages]
ipython = "*"
2540
Pipfile.lock
generated
84
README-de.md
Normal file
@@ -0,0 +1,84 @@
*[English](README.md)*<br/>
*[Greek](README-el.md)*

# Paperless

[](https://paperless.readthedocs.org/) [](https://gitter.im/danielquinn/paperless) [](https://travis-ci.org/danielquinn/paperless) [](https://coveralls.io/github/danielquinn/paperless?branch=master) [](https://github.com/danielquinn/paperless/blob/master/THANKS.md)

Index and archive all of your scanned paper documents

I hate paper. Environmental issues aside, it is the nightmare of a technically minded person:

* There is no search feature
* It takes up physical space
* Backups mean more paper

In the past months I have repeatedly run into the problem of not having the right document at hand. Sometimes I threw away documents I still needed (who keeps water bills for two years?), others I simply lost... because paper. I wrote this to make my life easier.


## How it works

Paperless does not control your scanner, it only helps you deal with what your scanner spits out.

1. Buy a document scanner that can write to a place on your network. If you need inspiration, have a look at the [scanner recommendations](https://paperless.readthedocs.io/en/latest/scanners.html).
2. Set up "scan to FTP" or something similar. It should be possible to upload scanned images to a server without having to do anything. Of course, you can also upload the scanned file manually if your scanner does not support automatic uploading. Paperless does not care how documents get into its local consumption folder.
3. Have a target server run the Paperless consumption script to OCR the file and index it into a local database.
4. Use the web interface to sift through the database and find what you are looking for.
5. Download the PDF you need/want via the web interface and do whatever you want with it. You can also print it and send it as if it were the original. In most cases nobody will care or notice.

Here is what you get:



## Documentation

It is all available on [ReadTheDocs](https://paperless.readthedocs.org/).


## Requirements

This is all a really quite simple, shiny and user-friendly wrapper around some very powerful tools.

* [ImageMagick](http://imagemagick.org/) converts the images between colour and greyscale.
* [Tesseract](https://github.com/tesseract-ocr) does the character recognition.
* [Unpaper](https://www.flameeyes.eu/projects/unpaper) cleans up and deskews the scanned image.
* [GNU Privacy Guard](https://gnupg.org/) is used as the encryption backend.
* [Python 3](https://python.org/) is the language of the project.
* [Pillow](https://pypi.python.org/pypi/pillowfight/) loads the image data as a Python object to be used with PyOCR.
* [PyOCR](https://github.com/jflesch/pyocr) is a slick programmatic wrapper around Tesseract.
* [Django](https://www.djangoproject.com/) is the framework this project is built on.
* [Python-GNUPG](http://pythonhosted.org/python-gnupg/) decrypts the PDFs on the fly to allow downloading unencrypted files while the encrypted files remain on disk.


## Project status

This project was started around 2015 and there are many people using it. For whatever reason it is quite popular in Germany -- maybe someone over there can enlighten me as to why.

I am no longer developing new features for Paperless because it does exactly what I need and my attention is devoted to my latest project, [Aletheia](https://github.com/danielquinn/aletheia). However, I am not abandoning the project. I am happy to review pull requests and answer questions in the issue section. If you are a developer and want a new feature, queue it up in the issues and/or send a PR! I am happy to add new things, but simply do not have the time to work them out myself.


## Affiliated projects

Paperless has been around for a while and people have started building things around it. If you are one of these people, you can add your project to this list:

* [Paperless Desktop](https://github.com/thomasbrueggemann/paperless-desktop): A desktop interface for your Paperless installation. Runs on Mac, Linux and Windows.
* [ansible-role-paperless](https://github.com/ovv/ansible-role-paperless): An easy way to run Paperless via Ansible.


## Similar projects

There is also the [Mayan EDMS](https://mayan.readthedocs.org/en/latest/) project out there, which surprisingly has a very large overlap in techniques with Paperless. Mayan EDMS is *much* more feature-rich and also comes with a slick UI, but is still on Python 2; like Paperless it is based on Django and uses a consumption model with Tesseract and Unpaper. It may be that Paperless uses fewer resources, but to be honest I have not tested this myself. One thing is clear though, *Paperless* is a **much** better name.


## Important note

Document scanners are typically used to scan sensitive documents. Things like your social insurance number, tax records, invoices, etc. While Paperless encrypts the original files via the consumption script, the OCR texts are *not* encrypted and are therefore stored in clear text (they have to be searchable, so if someone has an idea how to do this with encrypted data: I am all ears). This means that Paperless should never run on an untrusted host. Instead, I recommend that if you want to use it, you run it locally on a server in your own home.


## Donations

As with all free software, the power lies less in the finances and more in the joint effort. I truly appreciate every pull request and bug report contributed by users of Paperless, so please keep it up. If, however, you are not the programming/design/documentation type and really want to support me financially, I will not say no ;-)

The thing is, I am financially OK, so I would ask you to donate to the [United Nations High Commissioner for Refugees](https://donate.unhcr.org/int-en/general) instead. They do important work and need the money much more urgently than I do.
81
README-el.md
Normal file
@@ -0,0 +1,81 @@
*[English](README.md)*<br/>
*[German](README-de.md)*

# Paperless

[](https://paperless.readthedocs.org/) [](https://gitter.im/danielquinn/paperless) [](https://travis-ci.org/danielquinn/paperless) [](https://coveralls.io/github/danielquinn/paperless?branch=master) [](https://github.com/danielquinn/paperless/blob/master/THANKS.md)

Index and archive all of your scanned documents

I hate paper. Beyond the environmental issues, it is a technical person's nightmare.

* There is no way to search it
* It takes up a lot of space
* Backups mean more paper

In the past few months it has happened to me several times that I could not find the right document. Sometimes I recycled the document I needed (who keeps water bills for 2 years?) and sometimes I simply lost it... because that is how paper is. I did this to make my life easier.


## How it works

Paperless does not control your scanner, it helps you deal with what your scanner produces.

1. Buy a scanner with access to your network. If you need inspiration, see the page with the [recommended scanners](https://paperless.readthedocs.io/en/latest/scanners.html).
2. Set up "scan to FTP" or something similar. It should be able to store the scanned images on a server without you having to do anything. Of course, if your scanner cannot automatically store the images somewhere, you can do it manually. Paperless does not care how the files end up there.
3. Have the target server run the Paperless OCR script and index the files into a local database.
4. Use the web frontend to search the database and find what you want.
5. Download the PDF you want/need via the web interface and do whatever you want with it. You can even print it and send it as if it were the original. In most cases nobody will notice or care.

This is what you get:



## Documentation

It is all available on [ReadTheDocs](https://paperless.readthedocs.org/).


## Requirements

This is all very simple and user-friendly, a collection of valuable tools.

* [ImageMagick](http://imagemagick.org/) converts the images between colour and greyscale.
* [Tesseract](https://github.com/tesseract-ocr) does the character recognition.
* [Unpaper](https://www.flameeyes.eu/projects/unpaper) despeckles and deskews the scanned image.
* [GNU Privacy Guard](https://gnupg.org/) is used for encryption in the backend.
* [Python 3](https://python.org/) is the language of the project.
* [Pillow](https://pypi.python.org/pypi/pillowfight/) loads the image as a Python object and can be used with PyOCR.
* [PyOCR](https://github.com/jflesch/pyocr) is a slick programmatic wrapper around tesseract.
* [Django](https://www.djangoproject.com/) is the framework the project is built with.
* [Python-GNUPG](http://pythonhosted.org/python-gnupg/) decrypts the PDF files on the fly so that you can download decrypted files, leaving the encrypted ones on disk.


## Stability

This project has existed since 2015 and quite a few people are using it; nevertheless it is under continuous development (just look at when commits were made in the git history), so do not expect it to be 100% stable. You can back up the sqlite3 database, the media folder and your configuration file to be safe.


## Affiliated Projects

Paperless has been around for some time now and people have started building things around it. If you are one of those people, we can put your project on this list:

* [Paperless Desktop](https://github.com/thomasbrueggemann/paperless-desktop): A desktop application for your Paperless installation. Runs on Mac, Linux, and Windows.
* [ansible-role-paperless](https://github.com/ovv/ansible-role-paperless): An easy way to get Paperless running via Ansible.


## Similar Projects

There is another project called [Mayan EDMS](https://mayan.readthedocs.org/en/latest/) which has a surprising amount of technical overlap with Paperless. Also based on Django and using the consumer model with Tesseract and Unpaper, Mayan EDMS has *many* more features and comes with a slick UI, but is still on Python 2. It may be that Paperless consumes fewer resources, but to be honest, that is a guess I have not verified myself. One thing is certain, *Paperless* has a **much** better name.


## Important Note

Document scanners are typically used for sensitive documents. Things like your social security number, tax records, invoices, etc. Although Paperless encrypts the original files via the consumption script, the OCR text is *not* encrypted and is therefore stored in the clear (it has to be searchable, so if anyone knows how to do that with encrypted data I am all ears). This means that Paperless should never run on an untrusted host. For that reason I recommend, if you want to run it, that you run it on a local server in your home.


## Donations

As with all free software, the power lies not in the finances but in the collective effort. I truly appreciate every pull request and bug report offered by Paperless users, so please keep it up. If, however, you cannot write code/do design/write documentation and still want to contribute financially, I will not say no ;-)

The thing is, I am financially OK, so I would ask you to donate your money to the [United Nations High Commissioner for Refugees](https://donate.unhcr.org/int-en/general). They do important work and need the money much more than I do.
129
README.md
@@ -1,120 +1,83 @@
[](https://github.com/paperless-ngx/paperless-ngx/actions)
[](https://crowdin.com/project/paperless-ngx)
[](https://paperless-ngx.readthedocs.io/en/latest/?badge=latest)
[](https://coveralls.io/github/paperless-ngx/paperless-ngx?branch=master)
[](https://matrix.to/#/#paperless:adnidor.de)
*[German](README-de.md)*<br/>
*[Greek](README-el.md)*

<p align="center">
<img src="https://github.com/paperless-ngx/paperless-ngx/raw/main/resources/logo/web/png/Black%20logo%20-%20no%20background.png#gh-light-mode-only" width="50%" />
<img src="https://github.com/paperless-ngx/paperless-ngx/raw/main/resources/logo/web/png/White%20logo%20-%20no%20background.png#gh-dark-mode-only" width="50%" />
</p>
# Paperless

<!-- omit in toc -->
[](https://paperless.readthedocs.org/) [](https://gitter.im/danielquinn/paperless) [](https://travis-ci.org/danielquinn/paperless) [](https://coveralls.io/github/danielquinn/paperless?branch=master) [](https://github.com/danielquinn/paperless/blob/master/THANKS.md)

# Paperless-ngx
Index and archive all of your scanned paper documents

Paperless-ngx is a document management system that transforms your physical documents into a searchable online archive so you can keep, well, _less paper_.
I hate paper. Environmental issues aside, it's a tech person's nightmare:

Paperless-ngx forked from [paperless-ng](https://github.com/jonaswinkler/paperless-ng) to continue the great work and distribute responsibility of supporting and advancing the project among a team of people. [Consider joining us!](#community-support) Discussion of this transition can be found in issues
[#1599](https://github.com/jonaswinkler/paperless-ng/issues/1599) and [#1632](https://github.com/jonaswinkler/paperless-ng/issues/1632).
* There's no search feature
* It takes up physical space
* Backups mean more paper

A demo is available at [demo.paperless-ngx.com](https://demo.paperless-ngx.com) using login `demo` / `demo`. _Note: demo content is reset frequently and confidential information should not be uploaded._
In the past few months I've been bitten more than a few times by the problem of not having the right document around. Sometimes I recycled a document I needed (who keeps water bills for two years?) and other times I just lost it... because paper. I wrote this to make my life easier.

- [Features](#features)
- [Getting started](#getting-started)
- [Contributing](#contributing)
- [Community Support](#community-support)
- [Translation](#translation)
- [Feature Requests](#feature-requests)
- [Bugs](#bugs)
- [Affiliated Projects](#affiliated-projects)
- [Important Note](#important-note)

# Features
## How it Works



Paperless does not control your scanner, it only helps you deal with what your scanner produces

- Organize and index your scanned documents with tags, correspondents, types, and more.
- Performs OCR on your documents, adds selectable text to image only documents and adds tags, correspondents and document types to your documents.
- Supports PDF documents, images, plain text files, and Office documents (Word, Excel, Powerpoint, and LibreOffice equivalents).
- Office document support is optional and provided by Apache Tika (see [configuration](https://paperless-ngx.readthedocs.io/en/latest/configuration.html#tika-settings))
- Paperless stores your documents plain on disk. Filenames and folders are managed by paperless and their format can be configured freely.
- Single page application front end.
- Includes a dashboard that shows basic statistics and has document upload.
- Filtering by tags, correspondents, types, and more.
- Customizable views can be saved and displayed on the dashboard.
- Full text search helps you find what you need.
- Auto completion suggests relevant words from your documents.
- Results are sorted by relevance to your search query.
- Highlighting shows you which parts of the document matched the query.
- Searching for similar documents ("More like this")
- Email processing: Paperless adds documents from your email accounts.
- Configure multiple accounts and filters for each account.
- When adding documents from mail, paperless can move these mail to a new folder, mark them as read, flag them as important or delete them.
- Machine learning powered document matching (see the illustrative sketch after this section).
- Paperless-ngx learns from your documents and will be able to automatically assign tags, correspondents and types to documents once you've stored a few documents in paperless.
- Optimized for multi core systems: Paperless-ngx consumes multiple documents in parallel.
- The integrated sanity checker makes sure that your document archive is in good health.
- [More screenshots are available in the documentation](https://paperless-ngx.readthedocs.io/en/latest/screenshots.html).
1. Buy a document scanner that can write to a place on your network. If you need some inspiration, have a look at the [scanner recommendations](https://paperless.readthedocs.io/en/latest/scanners.html) page.
2. Set it up to "scan to FTP" or something similar. It should be able to push scanned images to a server without you having to do anything. Of course if your scanner doesn't know how to automatically upload the file somewhere, you can always do that manually. Paperless doesn't care how the documents get into its local consumption directory.
3. Have the target server run the Paperless consumption script to OCR the file and index it into a local database.
4. Use the web frontend to sift through the database and find what you want.
5. Download the PDF you need/want via the web interface and do whatever you like with it. You can even print it and send it as if it's the original. In most cases, no one will care or notice.
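The feature list above mentions machine-learning powered document matching, and the Pipfile pins scikit-learn. As a rough illustration of the general idea only, the sketch below shows how OCR'd text can drive tag suggestions; it is not the actual paperless-ngx classifier, and the training data and pipeline choices here are assumptions.

```python
# Illustrative sketch of text-based document matching with scikit-learn.
# NOT the actual paperless-ngx classifier; training data and model choice
# are placeholders for the general approach only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Hypothetical training data: OCR'd text of documents the user already tagged.
texts = ["invoice for water supply ...", "lease agreement for apartment ..."]
labels = ["utility", "housing"]

model = make_pipeline(TfidfVectorizer(), MultinomialNB())
model.fit(texts, labels)

# Suggest a tag for a newly consumed document.
print(model.predict(["reminder: water invoice due next month"])[0])
```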
# Getting started
|
||||
Here's what you get:
|
||||
|
||||
The easiest way to deploy paperless is docker-compose. The files in the [`/docker/compose` directory](https://github.com/paperless-ngx/paperless-ngx/tree/main/docker/compose) are configured to pull the image from Github Packages.
|
||||

|
||||
|
||||
If you'd like to jump right in, you can configure a docker-compose environment with our install script:
|
||||
|
||||
```bash
|
||||
bash -c "$(curl -L https://raw.githubusercontent.com/paperless-ngx/paperless-ngx/main/install-paperless-ngx.sh)"
|
||||
```
|
||||
## Documentation
|
||||
|
||||
Alternatively, you can install the dependencies and setup apache and a database server yourself. The [documentation](https://paperless-ngx.readthedocs.io/en/latest/setup.html#installation) has a step by step guide on how to do it.
|
||||
It's all available on [ReadTheDocs](https://paperless.readthedocs.org/).
|
||||
|
||||
Migrating from Paperless-ng is easy, just drop in the new docker image! See the [documentation on migrating](https://paperless-ngx.readthedocs.io/en/latest/setup.html#migrating-from-paperless-ng) for more details.
|
||||
|
||||
<!-- omit in toc -->
|
||||
## Requirements
|
||||
|
||||
### Documentation
|
||||
This is all really a quite simple, shiny, user-friendly wrapper around some very powerful tools.
|
||||
|
||||
The documentation for Paperless-ngx is available on [ReadTheDocs](https://paperless-ngx.readthedocs.io/).
|
||||
* [ImageMagick](http://imagemagick.org/) converts the images between colour and greyscale.
|
||||
* [Tesseract](https://github.com/tesseract-ocr) does the character recognition.
|
||||
* [Unpaper](https://www.flameeyes.eu/projects/unpaper) despeckles and deskews the scanned image.
|
||||
* [GNU Privacy Guard](https://gnupg.org/) is used as the encryption backend.
|
||||
* [Python 3](https://python.org/) is the language of the project.
|
||||
* [Pillow](https://pypi.python.org/pypi/pillowfight/) loads the image data as a python object to be used with PyOCR.
|
||||
* [PyOCR](https://github.com/jflesch/pyocr) is a slick programmatic wrapper around tesseract.
|
||||
* [Django](https://www.djangoproject.com/) is the framework this project is written against.
|
||||
* [Python-GNUPG](http://pythonhosted.org/python-gnupg/) decrypts the PDFs on-the-fly to allow you to download unencrypted files, leaving the encrypted ones on-disk.
# Contributing

If you feel like contributing to the project, please do! Bug fixes, enhancements, visual fixes, etc. are always welcome. If you want to implement something big, please start a discussion about it first! The [documentation](https://paperless-ngx.readthedocs.io/en/latest/extending.html) has some basic information on how to get started.

## Project Status

## Community Support

This project has been around since 2015, and there are lots of people using it. For some reason, it's really popular in Germany -- maybe someone over there can clue me in as to why?

People interested in continuing the work on paperless-ngx are encouraged to reach out here on GitHub and in the [Matrix Room](https://matrix.to/#/#paperless:adnidor.de). If you would like to contribute to the project on an ongoing basis, there are multiple [teams](https://github.com/orgs/paperless-ngx/people) (frontend, CI/CD, etc.) that could use your help, so please reach out!

I am no longer doing new development on Paperless, as it does exactly what I need it to, and have since turned my attention to my latest project, [Aletheia](https://github.com/danielquinn/aletheia). However, I'm not abandoning this project. I am happy to field pull requests and answer questions in the issue queue. If you're a developer yourself and want a new feature, float it in the issue queue and/or send me a pull request! I'm happy to add new stuff, but I just don't have the time to do that work myself.

## Translation

Paperless-ngx is available in many languages that are coordinated on Crowdin. If you want to help out by translating paperless-ngx into your language, please head over to https://crwd.in/paperless-ngx, and thank you! More details can be found in [CONTRIBUTING.md](https://github.com/paperless-ngx/paperless-ngx/blob/main/CONTRIBUTING.md#translating-paperless-ngx).
## Affiliated Projects

## Feature Requests

Paperless has been around a while now, and people are starting to build stuff on top of it. If you're one of those people, we can add your project to this list:

Feature requests can be submitted via [GitHub Discussions](https://github.com/paperless-ngx/paperless-ngx/discussions/categories/feature-requests), where you can search for existing ideas, add your own, and vote for the ones you care about.

* [Paperless Desktop](https://github.com/thomasbrueggemann/paperless-desktop): A desktop UI for your Paperless installation. Runs on Mac, Linux, and Windows.
* [ansible-role-paperless](https://github.com/ovv/ansible-role-paperless): An easy way to get Paperless running via Ansible.

## Bugs

For bugs, please [open an issue](https://github.com/paperless-ngx/paperless-ngx/issues), or [start a discussion](https://github.com/paperless-ngx/paperless-ngx/discussions) if you have questions.

## Similar Projects

# Affiliated Projects

There's another project out there called [Mayan EDMS](https://mayan.readthedocs.org/en/latest/) that has a surprising amount of technical overlap with Paperless. Also based on Django and using a consumer model with Tesseract and Unpaper, Mayan EDMS is *much* more featureful and comes with a slick UI as well, but it is still in Python 2. It may be that Paperless consumes fewer resources, but to be honest, this is just a guess as I haven't tested this myself. One thing's for certain though: *Paperless* is a **way** better name.

Paperless has been around a while now, and people are starting to build stuff on top of it. If you're one of those people, we can add your project to this list:

- [Paperless App](https://github.com/bauerj/paperless_app): An Android/iOS app for Paperless-ngx. Also works with the original Paperless and Paperless-ng.
- [Paperless Share](https://github.com/qcasey/paperless_share): Share any files from your Android device with Paperless. Very simple, but works with all of the mobile scanning apps out there that allow you to share scanned documents.
- [Scan to Paperless](https://github.com/sbrunner/scan-to-paperless): Scan and prepare (crop, deskew, OCR, ...) your documents for Paperless.
## Important Note

These projects also exist, but their status and compatibility with paperless-ngx are unknown.

Document scanners are typically used to scan sensitive documents: things like your social insurance number, tax records, invoices, etc. While Paperless encrypts the original files via the consumption script, the OCR'd text is *not* encrypted and is therefore stored in the clear (it needs to be searchable, so if someone has ideas on how to do that on encrypted data, I'm all ears). This means that Paperless should never be run on an untrusted host. Instead, I recommend that if you do want to use it, you run it locally on a server in your own home.

- [paperless-cli](https://github.com/stgarf/paperless-cli): A Go command-line binary to interact with a Paperless instance.

This project also exists, but needs updates to be compatible with paperless-ngx.

## Donations

- [Paperless Desktop](https://github.com/thomasbrueggemann/paperless-desktop): A desktop UI for your Paperless installation. Runs on Mac, Linux, and Windows.
  Known issues on Mac: could not load reminders and documents.

As with all Free software, the power is less in the finances and more in the collective efforts. I really appreciate every pull request and bug report offered up by Paperless' users, so please keep that stuff coming. If, however, you're not one for coding/design/documentation and would like to contribute financially, I won't say no ;-)

# Important Note

Document scanners are typically used to scan sensitive documents: things like your social insurance number, tax records, invoices, etc. Everything is stored in the clear, without encryption. This means that Paperless should never be run on an untrusted host. Instead, I recommend that if you do want to use it, you run it locally on a server in your own home.

The thing is, I'm doing OK for money, so I would instead ask you to donate to the [United Nations High Commissioner for Refugees](https://donate.unhcr.org/int-en/general). They're doing important work and they need the money a lot more than I do.
19
THANKS.md
Normal file
@@ -0,0 +1,19 @@
# Thanks for using Paperless!

Working on this project has been exhausting, but rewarding at the same time.
It's just wonderful that so many people are using this thing, and in so many
crazy ways.

This file is here for everyone to post their own stories about how they use this
code. It helps me to understand who's using it and why, and maybe to give
others an idea of how it might be used. It's based on a Twitter exchange
between [John Glanville](https://twitter.com/hexapodium) and
[Julia Evans](https://github.com/jvns) and was later better defined [here](https://github.com/paulmolluzzo/thanks-md).

To contribute, simply issue a pull request that appends to this file something
like this:

```
### Your Name
Some friendly message
```
@@ -1,43 +0,0 @@
|
||||
#!/usr/bin/env bash
|
||||
|
||||
# Helper script for building the Docker image locally.
|
||||
# Parses and provides the necessary versions of other images to Docker
|
||||
# before passing in the rest of script args.
|
||||
|
||||
# First Argument: The Dockerfile to build
|
||||
# Other Arguments: Additional arguments to docker build
|
||||
|
||||
# Example Usage:
|
||||
# ./build-docker-image.sh Dockerfile -t paperless-ngx:my-awesome-feature
|
||||
|
||||
set -eux
|
||||
|
||||
if ! command -v jq; then
|
||||
echo "jq required"
|
||||
exit 1
|
||||
elif [ ! -f "$1" ]; then
|
||||
echo "$1 is not a file, please provide the Dockerfile"
|
||||
exit 1
|
||||
fi
|
||||
|
||||
# Parse what we can from Pipfile.lock
|
||||
pikepdf_version=$(jq ".default.pikepdf.version" Pipfile.lock | sed 's/=//g' | sed 's/"//g')
|
||||
psycopg2_version=$(jq ".default.psycopg2.version" Pipfile.lock | sed 's/=//g' | sed 's/"//g')
|
||||
# Read this from the other config file
|
||||
qpdf_version=$(jq ".qpdf.version" .build-config.json | sed 's/"//g')
|
||||
jbig2enc_version=$(jq ".jbig2enc.version" .build-config.json | sed 's/"//g')
|
||||
# Get the branch name (used for caching)
|
||||
branch_name=$(git rev-parse --abbrev-ref HEAD)
|
||||
|
||||
# https://docs.docker.com/develop/develop-images/build_enhancements/
|
||||
# Required to use cache-from
|
||||
export DOCKER_BUILDKIT=1
|
||||
|
||||
docker build --file "$1" \
|
||||
--progress=plain \
|
||||
--cache-from ghcr.io/paperless-ngx/paperless-ngx/builder/cache/app:"${branch_name}" \
|
||||
--cache-from ghcr.io/paperless-ngx/paperless-ngx/builder/cache/app:dev \
|
||||
--build-arg JBIG2ENC_VERSION="${jbig2enc_version}" \
|
||||
--build-arg QPDF_VERSION="${qpdf_version}" \
|
||||
--build-arg PIKEPDF_VERSION="${pikepdf_version}" \
|
||||
--build-arg PSYCOPG2_VERSION="${psycopg2_version}" "${@:2}" .
|
@@ -1,6 +0,0 @@
|
||||
commit_message: '[ci skip]'
|
||||
files:
|
||||
- source: /src/locale/en_US/LC_MESSAGES/django.po
|
||||
translation: /src/locale/%locale_with_underscore%/LC_MESSAGES/django.po
|
||||
- source: /src-ui/messages.xlf
|
||||
translation: /src-ui/src/locale/messages.%locale_with_underscore%.xlf
|
@@ -1,14 +0,0 @@
|
||||
# This Dockerfile compiles the frontend
|
||||
# Inputs: None
|
||||
|
||||
FROM node:16-bullseye-slim AS compile-frontend
|
||||
|
||||
COPY ./src /src/src
|
||||
COPY ./src-ui /src/src-ui
|
||||
|
||||
WORKDIR /src/src-ui
|
||||
RUN set -eux \
|
||||
&& npm update npm -g \
|
||||
&& npm ci --omit=optional
|
||||
RUN set -eux \
|
||||
&& ./node_modules/.bin/ng build --configuration production
|
@@ -1,39 +0,0 @@
|
||||
# This Dockerfile compiles the jbig2enc library
|
||||
# Inputs:
|
||||
# - JBIG2ENC_VERSION - the Git tag to checkout and build
|
||||
|
||||
FROM debian:bullseye-slim as main
|
||||
|
||||
LABEL org.opencontainers.image.description="An intermediate image with jbig2enc built"
|
||||
|
||||
ARG DEBIAN_FRONTEND=noninteractive
|
||||
|
||||
ARG BUILD_PACKAGES="\
|
||||
build-essential \
|
||||
automake \
|
||||
libtool \
|
||||
libleptonica-dev \
|
||||
zlib1g-dev \
|
||||
git \
|
||||
ca-certificates"
|
||||
|
||||
WORKDIR /usr/src/jbig2enc
|
||||
|
||||
# As this is a base image for a multi-stage final image
|
||||
# the added size of the install is basically irrelevant
|
||||
RUN apt-get update --quiet \
|
||||
&& apt-get install --yes --quiet --no-install-recommends ${BUILD_PACKAGES} \
|
||||
&& rm -rf /var/lib/apt/lists/*
|
||||
|
||||
# Layers after this point change according to required version
|
||||
# For better caching, separate the basic installs from
|
||||
# the building
|
||||
|
||||
ARG JBIG2ENC_VERSION
|
||||
|
||||
RUN set -eux \
|
||||
&& git clone --quiet --branch $JBIG2ENC_VERSION https://github.com/agl/jbig2enc .
|
||||
RUN set -eux \
|
||||
&& ./autogen.sh
|
||||
RUN set -eux \
|
||||
&& ./configure && make
|
@@ -1,88 +0,0 @@
|
||||
# This Dockerfile builds the pikepdf wheel
|
||||
# Inputs:
|
||||
# - REPO - Docker repository to pull qpdf from
|
||||
# - QPDF_VERSION - The image qpdf version to copy .deb files from
|
||||
# - PIKEPDF_VERSION - Version of pikepdf to build wheel for
|
||||
|
||||
# Default to pulling from the main repo registry when manually building
|
||||
ARG REPO="paperless-ngx/paperless-ngx"
|
||||
|
||||
ARG QPDF_VERSION
|
||||
FROM ghcr.io/${REPO}/builder/qpdf:${QPDF_VERSION} as qpdf-builder
|
||||
|
||||
# This does nothing, except provide a name for a copy below
|
||||
|
||||
FROM python:3.9-slim-bullseye as main
|
||||
|
||||
LABEL org.opencontainers.image.description="An intermediate image with the pikepdf wheel built"
|
||||
|
||||
ARG DEBIAN_FRONTEND=noninteractive
|
||||
|
||||
ARG BUILD_PACKAGES="\
|
||||
build-essential \
|
||||
python3-dev \
|
||||
python3-pip \
|
||||
# qpdf requirement - https://github.com/qpdf/qpdf#crypto-providers
|
||||
libgnutls28-dev \
|
||||
# lxml requirements - https://lxml.de/installation.html
|
||||
libxml2-dev \
|
||||
libxslt1-dev \
|
||||
# Pillow requirements - https://pillow.readthedocs.io/en/stable/installation.html#external-libraries
|
||||
# JPEG functionality
|
||||
libjpeg62-turbo-dev \
|
||||
# compressed PNG
|
||||
zlib1g-dev \
|
||||
# compressed TIFF
|
||||
libtiff-dev \
|
||||
# type related services
|
||||
libfreetype-dev \
|
||||
# color management
|
||||
liblcms2-dev \
|
||||
# WebP format
|
||||
libwebp-dev \
|
||||
# JPEG 2000
|
||||
libopenjp2-7-dev \
|
||||
# improved color quantization
|
||||
libimagequant-dev \
|
||||
# complex text layout support
|
||||
libraqm-dev"
|
||||
|
||||
WORKDIR /usr/src
|
||||
|
||||
COPY --from=qpdf-builder /usr/src/qpdf/*.deb ./
|
||||
|
||||
# As this is a base image for a multi-stage final image
|
||||
# the added size of the install is basically irrelevant
|
||||
|
||||
RUN set -eux \
|
||||
&& apt-get update --quiet \
|
||||
&& apt-get install --yes --quiet --no-install-recommends $BUILD_PACKAGES \
|
||||
&& dpkg --install libqpdf28_*.deb \
|
||||
&& dpkg --install libqpdf-dev_*.deb \
|
||||
&& python3 -m pip install --no-cache-dir --upgrade \
|
||||
pip \
|
||||
wheel \
|
||||
# https://pikepdf.readthedocs.io/en/latest/installation.html#requirements
|
||||
pybind11 \
|
||||
&& rm -rf /var/lib/apt/lists/*
|
||||
|
||||
# Layers after this point change according to required version
|
||||
# For better caching, separate the basic installs from
|
||||
# the building
|
||||
|
||||
ARG PIKEPDF_VERSION
|
||||
|
||||
RUN set -eux \
|
||||
&& echo "Building pikepdf wheel ${PIKEPDF_VERSION}" \
|
||||
&& mkdir wheels \
|
||||
&& python3 -m pip wheel \
|
||||
# Build the package at the required version
|
||||
pikepdf==${PIKEPDF_VERSION} \
|
||||
# Output the *.whl into this directory
|
||||
--wheel-dir wheels \
|
||||
# Do not use a binary package for the package being built
|
||||
--no-binary=pikepdf \
|
||||
# Do use binary packages for dependencies
|
||||
--prefer-binary \
|
||||
--no-cache-dir \
|
||||
&& ls -ahl wheels
|
@@ -1,49 +0,0 @@
|
||||
# This Dockerfile builds the psycopg2 wheel
|
||||
# Inputs:
|
||||
# - PSYCOPG2_VERSION - Version to build
|
||||
|
||||
FROM python:3.9-slim-bullseye as main
|
||||
|
||||
LABEL org.opencontainers.image.description="An intermediate image with the psycopg2 wheel built"
|
||||
|
||||
ARG DEBIAN_FRONTEND=noninteractive
|
||||
|
||||
ARG BUILD_PACKAGES="\
|
||||
build-essential \
|
||||
python3-dev \
|
||||
python3-pip \
|
||||
# https://www.psycopg.org/docs/install.html#prerequisites
|
||||
libpq-dev"
|
||||
|
||||
WORKDIR /usr/src
|
||||
|
||||
# As this is a base image for a multi-stage final image
|
||||
# the added size of the install is basically irrelevant
|
||||
|
||||
RUN set -eux \
|
||||
&& apt-get update --quiet \
|
||||
&& apt-get install --yes --quiet --no-install-recommends $BUILD_PACKAGES \
|
||||
&& rm -rf /var/lib/apt/lists/* \
|
||||
&& python3 -m pip install --no-cache-dir --upgrade pip wheel
|
||||
|
||||
# Layers after this point change according to required version
|
||||
# For better caching, separate the basic installs from
|
||||
# the building
|
||||
|
||||
ARG PSYCOPG2_VERSION
|
||||
|
||||
RUN set -eux \
|
||||
&& echo "Building psycopg2 wheel ${PSYCOPG2_VERSION}" \
|
||||
&& cd /usr/src \
|
||||
&& mkdir wheels \
|
||||
&& python3 -m pip wheel \
|
||||
# Build the package at the required version
|
||||
psycopg2==${PSYCOPG2_VERSION} \
|
||||
# Output the *.whl into this directory
|
||||
--wheel-dir wheels \
|
||||
# Do not use a binary package for the package being built
|
||||
--no-binary=psycopg2 \
|
||||
# Do use binary packages for dependencies
|
||||
--prefer-binary \
|
||||
--no-cache-dir \
|
||||
&& ls -ahl wheels/
|
@@ -1,53 +0,0 @@
|
||||
FROM debian:bullseye-slim as main
|
||||
|
||||
LABEL org.opencontainers.image.description="An intermediate image with qpdf built"
|
||||
|
||||
ARG DEBIAN_FRONTEND=noninteractive
|
||||
|
||||
ARG BUILD_PACKAGES="\
|
||||
build-essential \
|
||||
debhelper \
|
||||
debian-keyring \
|
||||
devscripts \
|
||||
equivs \
|
||||
libtool \
|
||||
# https://qpdf.readthedocs.io/en/stable/installation.html#system-requirements
|
||||
libjpeg62-turbo-dev \
|
||||
libgnutls28-dev \
|
||||
packaging-dev \
|
||||
zlib1g-dev"
|
||||
|
||||
WORKDIR /usr/src
|
||||
|
||||
# As this is a base image for a multi-stage final image
|
||||
# the added size of the install is basically irrelevant
|
||||
|
||||
RUN set -eux \
|
||||
&& apt-get update --quiet \
|
||||
&& apt-get install --yes --quiet --no-install-recommends $BUILD_PACKAGES \
|
||||
&& rm -rf /var/lib/apt/lists/*
|
||||
|
||||
# Layers after this point change according to required version
|
||||
# For better caching, separate the basic installs from
|
||||
# the building
|
||||
|
||||
# This must match to pikepdf's minimum at least
|
||||
ARG QPDF_VERSION
|
||||
|
||||
# In order to get the required version of qpdf, it is backported from bookworm
|
||||
# and then built from source
|
||||
RUN set -eux \
|
||||
&& echo "Building qpdf" \
|
||||
&& echo "deb-src http://deb.debian.org/debian/ bookworm main" > /etc/apt/sources.list.d/bookworm-src.list \
|
||||
&& apt-get update \
|
||||
&& mkdir qpdf \
|
||||
&& cd qpdf \
|
||||
&& apt-get source --yes --quiet qpdf=${QPDF_VERSION}-1/bookworm \
|
||||
&& rm -rf /var/lib/apt/lists/* \
|
||||
&& cd qpdf-$QPDF_VERSION \
|
||||
# We don't need to build the tests (also don't run them)
|
||||
&& rm -rf libtests \
|
||||
&& DEBEMAIL=hello@paperless-ngx.com debchange --bpo \
|
||||
&& export DEB_BUILD_OPTIONS="terse nocheck nodoc parallel=2" \
|
||||
&& dpkg-buildpackage --build=binary --unsigned-source --unsigned-changes \
|
||||
&& ls -ahl ../*.deb
|
22
docker-compose.env.example
Normal file
@@ -0,0 +1,22 @@
|
||||
# Environment variables to set for Paperless
|
||||
# Commented out variables will be replaced with a default within Paperless.
|
||||
#
|
||||
# In addition to what you see here, you can also define any values you find in
|
||||
# paperless.conf.example here. Values like:
|
||||
#
|
||||
# * PAPERLESS_PASSPHRASE
|
||||
# * PAPERLESS_CONSUMPTION_DIR
|
||||
# * PAPERLESS_CONSUME_MAIL_HOST
|
||||
#
|
||||
# ...are all explained in that file but can be defined here, since the Docker
|
||||
# installation doesn't make use of paperless.conf.
|
||||
|
||||
|
||||
# Additional languages to install for text recognition. Note that this is
|
||||
# different from PAPERLESS_OCR_LANGUAGE (default=eng), which defines the
|
||||
# default language used when guessing the language from the OCR output.
|
||||
# PAPERLESS_OCR_LANGUAGES=deu ita
|
||||
|
||||
# You can change the default user and group id to a custom one
|
||||
# USERMAP_UID=1000
|
||||
# USERMAP_GID=1000
|
53
docker-compose.yml.example
Normal file
@@ -0,0 +1,53 @@
|
||||
version: '2.1'
|
||||
|
||||
services:
|
||||
webserver:
|
||||
build: ./
|
||||
# uncomment the following line to start automatically on system boot
|
||||
# restart: always
|
||||
ports:
|
||||
# You can adapt the port you want Paperless to listen on by
|
||||
# modifying the part before the `:`.
|
||||
- "8000:8000"
|
||||
healthcheck:
|
||||
test: ["CMD", "curl" , "-f", "http://localhost:8000"]
|
||||
interval: 30s
|
||||
timeout: 10s
|
||||
retries: 5
|
||||
volumes:
|
||||
- data:/usr/src/paperless/data
|
||||
- media:/usr/src/paperless/media
|
||||
env_file: docker-compose.env
|
||||
# The lines below ensure that this webserver container, which doesn't do
# any text recognition, doesn't install the extra OCR languages the user
# might have set in the env file, by overriding the value with nothing.
|
||||
environment:
|
||||
- PAPERLESS_OCR_LANGUAGES=
|
||||
command: ["runserver", "--insecure", "--noreload", "0.0.0.0:8000"]
|
||||
|
||||
consumer:
|
||||
build: ./
|
||||
# uncomment the following line to start automatically on system boot
|
||||
# restart: always
|
||||
depends_on:
|
||||
webserver:
|
||||
condition: service_healthy
|
||||
volumes:
|
||||
- data:/usr/src/paperless/data
|
||||
- media:/usr/src/paperless/media
|
||||
# You have to adapt the local path you want the consumption
|
||||
# directory to mount to by modifying the part before the ':'.
|
||||
- ./consume:/consume
|
||||
# Likewise, you can add a local path to mount a directory for
|
||||
# exporting. This is not strictly needed for paperless to
|
||||
# function, only if you're exporting your files: uncomment
|
||||
# it and fill in a local path if you know you're going to
|
||||
# want to export your documents.
|
||||
# - /path/to/another/arbitrary/place:/export
|
||||
env_file: docker-compose.env
|
||||
command: ["document_consumer"]
|
||||
|
||||
volumes:
|
||||
data:
|
||||
media:
|
@@ -1 +0,0 @@
|
||||
COMPOSE_PROJECT_NAME=paperless
|
@@ -1,38 +0,0 @@
|
||||
# The UID and GID of the user used to run paperless in the container. Set this
|
||||
# to your UID and GID on the host so that you have write access to the
|
||||
# consumption directory.
|
||||
#USERMAP_UID=1000
|
||||
#USERMAP_GID=1000
|
||||
|
||||
# Additional languages to install for text recognition, separated by a
|
||||
# whitespace. Note that this is
|
||||
# different from PAPERLESS_OCR_LANGUAGE (default=eng), which defines the
|
||||
# language used for OCR.
|
||||
# The container installs English, German, Italian, Spanish and French by
|
||||
# default.
|
||||
# See https://packages.debian.org/search?keywords=tesseract-ocr-&searchon=names&suite=buster
|
||||
# for available languages.
|
||||
#PAPERLESS_OCR_LANGUAGES=tur ces
|
||||
|
||||
###############################################################################
|
||||
# Paperless-specific settings #
|
||||
###############################################################################
|
||||
|
||||
# All settings defined in the paperless.conf.example can be used here. The
|
||||
# Docker setup does not use the configuration file.
|
||||
# A few commonly adjusted settings are provided below.
|
||||
|
||||
# This is required if you will be exposing Paperless-ngx on a public domain
|
||||
# (if doing so, please consider security measures such as a reverse proxy)
|
||||
#PAPERLESS_URL=https://paperless.example.com
|
||||
|
||||
# Adjust this key if you plan to make paperless available publicly. It should
|
||||
# be a very long sequence of random characters. You don't need to remember it.
|
||||
#PAPERLESS_SECRET_KEY=change-me
|
||||
|
||||
# Use this variable to set a timezone for the Paperless Docker containers. If not specified, defaults to UTC.
|
||||
#PAPERLESS_TIME_ZONE=America/Los_Angeles
|
||||
|
||||
# The default language to use for OCR. Set this to the language most of your
|
||||
# documents are written in.
|
||||
#PAPERLESS_OCR_LANGUAGE=eng
|
@@ -1,97 +0,0 @@
|
||||
# docker-compose file for running paperless from the Docker Hub.
|
||||
# This file contains everything paperless needs to run.
|
||||
# Paperless supports amd64, arm and arm64 hardware.
|
||||
#
|
||||
# All compose files of paperless configure paperless in the following way:
|
||||
#
|
||||
# - Paperless is (re)started on system boot, if it was running before shutdown.
|
||||
# - Docker volumes for storing data are managed by Docker.
|
||||
# - Folders for importing and exporting files are created in the same directory
|
||||
# as this file and mounted to the correct folders inside the container.
|
||||
# - Paperless listens on port 8010.
|
||||
#
|
||||
# In addition to that, this docker-compose file adds the following optional
|
||||
# configurations:
|
||||
#
|
||||
# - Instead of SQLite (default), PostgreSQL is used as the database server.
|
||||
#
|
||||
# To install and update paperless with this file, do the following:
|
||||
#
|
||||
# - Open portainer Stacks list and click 'Add stack'
|
||||
# - Paste the contents of this file and assign a name, e.g. 'Paperless'
|
||||
# - Click 'Deploy the stack' and wait for it to be deployed
|
||||
# - Open the list of containers, select paperless_webserver_1
|
||||
# - Click 'Console' and then 'Connect' to open the command line inside the container
|
||||
# - Run 'python3 manage.py createsuperuser' to create a user
|
||||
# - Exit the console
|
||||
#
|
||||
# For more extensive installation and update instructions, refer to the
|
||||
# documentation.
|
||||
|
||||
version: "3.4"
|
||||
services:
|
||||
broker:
|
||||
image: docker.io/library/redis:6.0
|
||||
restart: unless-stopped
|
||||
volumes:
|
||||
- redisdata:/data
|
||||
|
||||
db:
|
||||
image: docker.io/library/postgres:13
|
||||
restart: unless-stopped
|
||||
volumes:
|
||||
- pgdata:/var/lib/postgresql/data
|
||||
environment:
|
||||
POSTGRES_DB: paperless
|
||||
POSTGRES_USER: paperless
|
||||
POSTGRES_PASSWORD: paperless
|
||||
|
||||
webserver:
|
||||
image: ghcr.io/paperless-ngx/paperless-ngx:latest
|
||||
restart: unless-stopped
|
||||
depends_on:
|
||||
- db
|
||||
- broker
|
||||
ports:
|
||||
- 8010:8000
|
||||
healthcheck:
|
||||
test: ["CMD", "curl", "-fs", "-S", "--max-time", "2", "http://localhost:8000"]
|
||||
interval: 30s
|
||||
timeout: 10s
|
||||
retries: 5
|
||||
volumes:
|
||||
- data:/usr/src/paperless/data
|
||||
- media:/usr/src/paperless/media
|
||||
- ./export:/usr/src/paperless/export
|
||||
- ./consume:/usr/src/paperless/consume
|
||||
environment:
|
||||
PAPERLESS_REDIS: redis://broker:6379
|
||||
PAPERLESS_DBHOST: db
|
||||
# The UID and GID of the user used to run paperless in the container. Set this
|
||||
# to your UID and GID on the host so that you have write access to the
|
||||
# consumption directory.
|
||||
USERMAP_UID: 1000
|
||||
USERMAP_GID: 100
|
||||
# Additional languages to install for text recognition, separated by a
|
||||
# whitespace. Note that this is
|
||||
# different from PAPERLESS_OCR_LANGUAGE (default=eng), which defines the
|
||||
# language used for OCR.
|
||||
# The container installs English, German, Italian, Spanish and French by
|
||||
# default.
|
||||
# See https://packages.debian.org/search?keywords=tesseract-ocr-&searchon=names&suite=buster
|
||||
# for available languages.
|
||||
#PAPERLESS_OCR_LANGUAGES: tur ces
|
||||
# Adjust this key if you plan to make paperless available publicly. It should
|
||||
# be a very long sequence of random characters. You don't need to remember it.
|
||||
#PAPERLESS_SECRET_KEY: change-me
|
||||
# Use this variable to set a timezone for the Paperless Docker containers. If not specified, defaults to UTC.
|
||||
#PAPERLESS_TIME_ZONE: America/Los_Angeles
|
||||
# The default language to use for OCR. Set this to the language most of your
|
||||
# documents are written in.
|
||||
#PAPERLESS_OCR_LANGUAGE: eng
|
||||
|
||||
volumes:
|
||||
data:
|
||||
media:
|
||||
pgdata:
|
||||
redisdata:
|
@@ -1,94 +0,0 @@
|
||||
# docker-compose file for running paperless from the docker container registry.
|
||||
# This file contains everything paperless needs to run.
|
||||
# Paperless supports amd64, arm and arm64 hardware.
|
||||
#
|
||||
# All compose files of paperless configure paperless in the following way:
|
||||
#
|
||||
# - Paperless is (re)started on system boot, if it was running before shutdown.
|
||||
# - Docker volumes for storing data are managed by Docker.
|
||||
# - Folders for importing and exporting files are created in the same directory
|
||||
# as this file and mounted to the correct folders inside the container.
|
||||
# - Paperless listens on port 8000.
|
||||
#
|
||||
# In addition to that, this docker-compose file adds the following optional
|
||||
# configurations:
|
||||
#
|
||||
# - Instead of SQLite (default), PostgreSQL is used as the database server.
|
||||
# - Apache Tika and Gotenberg servers are started with paperless and paperless
|
||||
# is configured to use these services. These provide support for consuming
|
||||
# Office documents (Word, Excel, Power Point and their LibreOffice counter-
|
||||
# parts).
|
||||
#
|
||||
# To install and update paperless with this file, do the following:
|
||||
#
|
||||
# - Copy this file as 'docker-compose.yml' and the files 'docker-compose.env'
|
||||
# and '.env' into a folder.
|
||||
# - Run 'docker-compose pull'.
|
||||
# - Run 'docker-compose run --rm webserver createsuperuser' to create a user.
|
||||
# - Run 'docker-compose up -d'.
|
||||
#
|
||||
# For more extensive installation and update instructions, refer to the
|
||||
# documentation.
|
||||
|
||||
version: "3.4"
|
||||
services:
|
||||
broker:
|
||||
image: docker.io/library/redis:6.0
|
||||
restart: unless-stopped
|
||||
volumes:
|
||||
- redisdata:/data
|
||||
|
||||
db:
|
||||
image: docker.io/library/postgres:13
|
||||
restart: unless-stopped
|
||||
volumes:
|
||||
- pgdata:/var/lib/postgresql/data
|
||||
environment:
|
||||
POSTGRES_DB: paperless
|
||||
POSTGRES_USER: paperless
|
||||
POSTGRES_PASSWORD: paperless
|
||||
|
||||
webserver:
|
||||
image: ghcr.io/paperless-ngx/paperless-ngx:latest
|
||||
restart: unless-stopped
|
||||
depends_on:
|
||||
- db
|
||||
- broker
|
||||
- gotenberg
|
||||
- tika
|
||||
ports:
|
||||
- 8000:8000
|
||||
healthcheck:
|
||||
test: ["CMD", "curl", "-fs", "-S", "--max-time", "2", "http://localhost:8000"]
|
||||
interval: 30s
|
||||
timeout: 10s
|
||||
retries: 5
|
||||
volumes:
|
||||
- data:/usr/src/paperless/data
|
||||
- media:/usr/src/paperless/media
|
||||
- ./export:/usr/src/paperless/export
|
||||
- ./consume:/usr/src/paperless/consume
|
||||
env_file: docker-compose.env
|
||||
environment:
|
||||
PAPERLESS_REDIS: redis://broker:6379
|
||||
PAPERLESS_DBHOST: db
|
||||
PAPERLESS_TIKA_ENABLED: 1
|
||||
PAPERLESS_TIKA_GOTENBERG_ENDPOINT: http://gotenberg:3000
|
||||
PAPERLESS_TIKA_ENDPOINT: http://tika:9998
|
||||
|
||||
gotenberg:
|
||||
image: docker.io/gotenberg/gotenberg:7.4
|
||||
restart: unless-stopped
|
||||
command:
|
||||
- "gotenberg"
|
||||
- "--chromium-disable-routes=true"
|
||||
|
||||
tika:
|
||||
image: ghcr.io/paperless-ngx/tika:latest
|
||||
restart: unless-stopped
|
||||
|
||||
volumes:
|
||||
data:
|
||||
media:
|
||||
pgdata:
|
||||
redisdata:
|
@@ -1,75 +0,0 @@
|
||||
# docker-compose file for running paperless from the Docker Hub.
|
||||
# This file contains everything paperless needs to run.
|
||||
# Paperless supports amd64, arm and arm64 hardware.
|
||||
#
|
||||
# All compose files of paperless configure paperless in the following way:
|
||||
#
|
||||
# - Paperless is (re)started on system boot, if it was running before shutdown.
|
||||
# - Docker volumes for storing data are managed by Docker.
|
||||
# - Folders for importing and exporting files are created in the same directory
|
||||
# as this file and mounted to the correct folders inside the container.
|
||||
# - Paperless listens on port 8000.
|
||||
#
|
||||
# In addition to that, this docker-compose file adds the following optional
|
||||
# configurations:
|
||||
#
|
||||
# - Instead of SQLite (default), PostgreSQL is used as the database server.
|
||||
#
|
||||
# To install and update paperless with this file, do the following:
|
||||
#
|
||||
# - Copy this file as 'docker-compose.yml' and the files 'docker-compose.env'
|
||||
# and '.env' into a folder.
|
||||
# - Run 'docker-compose pull'.
|
||||
# - Run 'docker-compose run --rm webserver createsuperuser' to create a user.
|
||||
# - Run 'docker-compose up -d'.
|
||||
#
|
||||
# For more extensive installation and update instructions, refer to the
|
||||
# documentation.
|
||||
|
||||
version: "3.4"
|
||||
services:
|
||||
broker:
|
||||
image: docker.io/library/redis:6.0
|
||||
restart: unless-stopped
|
||||
volumes:
|
||||
- redisdata:/data
|
||||
|
||||
db:
|
||||
image: docker.io/library/postgres:13
|
||||
restart: unless-stopped
|
||||
volumes:
|
||||
- pgdata:/var/lib/postgresql/data
|
||||
environment:
|
||||
POSTGRES_DB: paperless
|
||||
POSTGRES_USER: paperless
|
||||
POSTGRES_PASSWORD: paperless
|
||||
|
||||
webserver:
|
||||
image: ghcr.io/paperless-ngx/paperless-ngx:latest
|
||||
restart: unless-stopped
|
||||
depends_on:
|
||||
- db
|
||||
- broker
|
||||
ports:
|
||||
- 8000:8000
|
||||
healthcheck:
|
||||
test: ["CMD", "curl", "-fs", "-S", "--max-time", "2", "http://localhost:8000"]
|
||||
interval: 30s
|
||||
timeout: 10s
|
||||
retries: 5
|
||||
volumes:
|
||||
- data:/usr/src/paperless/data
|
||||
- media:/usr/src/paperless/media
|
||||
- ./export:/usr/src/paperless/export
|
||||
- ./consume:/usr/src/paperless/consume
|
||||
env_file: docker-compose.env
|
||||
environment:
|
||||
PAPERLESS_REDIS: redis://broker:6379
|
||||
PAPERLESS_DBHOST: db
|
||||
|
||||
|
||||
volumes:
|
||||
data:
|
||||
media:
|
||||
pgdata:
|
||||
redisdata:
|
@@ -1,81 +0,0 @@
|
||||
# docker-compose file for running paperless from the docker container registry.
|
||||
# This file contains everything paperless needs to run.
|
||||
# Paperless supports amd64, arm and arm64 hardware.
|
||||
# All compose files of paperless configure paperless in the following way:
|
||||
#
|
||||
# - Paperless is (re)started on system boot, if it was running before shutdown.
|
||||
# - Docker volumes for storing data are managed by Docker.
|
||||
# - Folders for importing and exporting files are created in the same directory
|
||||
# as this file and mounted to the correct folders inside the container.
|
||||
# - Paperless listens on port 8000.
|
||||
#
|
||||
# SQLite is used as the database. The SQLite file is stored in the data volume.
|
||||
#
|
||||
# In addition to that, this docker-compose file adds the following optional
|
||||
# configurations:
|
||||
#
|
||||
# - Apache Tika and Gotenberg servers are started with paperless and paperless
|
||||
# is configured to use these services. These provide support for consuming
|
||||
# Office documents (Word, Excel, Power Point and their LibreOffice counter-
|
||||
# parts).
|
||||
#
|
||||
# To install and update paperless with this file, do the following:
|
||||
#
|
||||
# - Copy this file as 'docker-compose.yml' and the files 'docker-compose.env'
|
||||
# and '.env' into a folder.
|
||||
# - Run 'docker-compose pull'.
|
||||
# - Run 'docker-compose run --rm webserver createsuperuser' to create a user.
|
||||
# - Run 'docker-compose up -d'.
|
||||
#
|
||||
# For more extensive installation and update instructions, refer to the
|
||||
# documentation.
|
||||
|
||||
version: "3.4"
|
||||
services:
|
||||
broker:
|
||||
image: docker.io/library/redis:6.0
|
||||
restart: unless-stopped
|
||||
volumes:
|
||||
- redisdata:/data
|
||||
|
||||
webserver:
|
||||
image: ghcr.io/paperless-ngx/paperless-ngx:latest
|
||||
restart: unless-stopped
|
||||
depends_on:
|
||||
- broker
|
||||
- gotenberg
|
||||
- tika
|
||||
ports:
|
||||
- 8000:8000
|
||||
healthcheck:
|
||||
test: ["CMD", "curl", "-fs", "-S", "--max-time", "2", "http://localhost:8000"]
|
||||
interval: 30s
|
||||
timeout: 10s
|
||||
retries: 5
|
||||
volumes:
|
||||
- data:/usr/src/paperless/data
|
||||
- media:/usr/src/paperless/media
|
||||
- ./export:/usr/src/paperless/export
|
||||
- ./consume:/usr/src/paperless/consume
|
||||
env_file: docker-compose.env
|
||||
environment:
|
||||
PAPERLESS_REDIS: redis://broker:6379
|
||||
PAPERLESS_TIKA_ENABLED: 1
|
||||
PAPERLESS_TIKA_GOTENBERG_ENDPOINT: http://gotenberg:3000
|
||||
PAPERLESS_TIKA_ENDPOINT: http://tika:9998
|
||||
|
||||
gotenberg:
|
||||
image: docker.io/gotenberg/gotenberg:7.4
|
||||
restart: unless-stopped
|
||||
command:
|
||||
- "gotenberg"
|
||||
- "--chromium-disable-routes=true"
|
||||
|
||||
tika:
|
||||
image: ghcr.io/paperless-ngx/tika:latest
|
||||
restart: unless-stopped
|
||||
|
||||
volumes:
|
||||
data:
|
||||
media:
|
||||
redisdata:
|
@@ -1,59 +0,0 @@
|
||||
# docker-compose file for running paperless from the Docker Hub.
|
||||
# This file contains everything paperless needs to run.
|
||||
# Paperless supports amd64, arm and arm64 hardware.
|
||||
#
|
||||
# All compose files of paperless configure paperless in the following way:
|
||||
#
|
||||
# - Paperless is (re)started on system boot, if it was running before shutdown.
|
||||
# - Docker volumes for storing data are managed by Docker.
|
||||
# - Folders for importing and exporting files are created in the same directory
|
||||
# as this file and mounted to the correct folders inside the container.
|
||||
# - Paperless listens on port 8000.
|
||||
#
|
||||
# SQLite is used as the database. The SQLite file is stored in the data volume.
|
||||
#
|
||||
# To install and update paperless with this file, do the following:
|
||||
#
|
||||
# - Copy this file as 'docker-compose.yml' and the files 'docker-compose.env'
|
||||
# and '.env' into a folder.
|
||||
# - Run 'docker-compose pull'.
|
||||
# - Run 'docker-compose run --rm webserver createsuperuser' to create a user.
|
||||
# - Run 'docker-compose up -d'.
|
||||
#
|
||||
# For more extensive installation and update instructions, refer to the
|
||||
# documentation.
|
||||
|
||||
version: "3.4"
|
||||
services:
|
||||
broker:
|
||||
image: docker.io/library/redis:6.0
|
||||
restart: unless-stopped
|
||||
volumes:
|
||||
- redisdata:/data
|
||||
|
||||
webserver:
|
||||
image: ghcr.io/paperless-ngx/paperless-ngx:latest
|
||||
restart: unless-stopped
|
||||
depends_on:
|
||||
- broker
|
||||
ports:
|
||||
- 8000:8000
|
||||
healthcheck:
|
||||
test: ["CMD", "curl", "-fs", "-S", "--max-time", "2", "http://localhost:8000"]
|
||||
interval: 30s
|
||||
timeout: 10s
|
||||
retries: 5
|
||||
volumes:
|
||||
- data:/usr/src/paperless/data
|
||||
- media:/usr/src/paperless/media
|
||||
- ./export:/usr/src/paperless/export
|
||||
- ./consume:/usr/src/paperless/consume
|
||||
env_file: docker-compose.env
|
||||
environment:
|
||||
PAPERLESS_REDIS: redis://broker:6379
|
||||
|
||||
|
||||
volumes:
|
||||
data:
|
||||
media:
|
||||
redisdata:
|
@@ -1,158 +0,0 @@
|
||||
#!/usr/bin/env bash
|
||||
|
||||
set -e
|
||||
|
||||
# Adapted from:
|
||||
# https://github.com/docker-library/postgres/blob/master/docker-entrypoint.sh
|
||||
# usage: file_env VAR
|
||||
# ie: file_env 'XYZ_DB_PASSWORD' will allow for "$XYZ_DB_PASSWORD_FILE" to
|
||||
# fill in the value of "$XYZ_DB_PASSWORD" from a file, especially for Docker's
|
||||
# secrets feature
|
||||
file_env() {
|
||||
local var="$1"
|
||||
local fileVar="${var}_FILE"
|
||||
|
||||
# Basic validation
|
||||
if [ "${!var:-}" ] && [ "${!fileVar:-}" ]; then
|
||||
echo >&2 "error: both $var and $fileVar are set (but are exclusive)"
|
||||
exit 1
|
||||
fi
|
||||
|
||||
# Only export var if the _FILE exists
|
||||
if [ "${!fileVar:-}" ]; then
|
||||
# And the file exists
|
||||
if [[ -f ${!fileVar} ]]; then
|
||||
echo "Setting ${var} from file"
|
||||
val="$(< "${!fileVar}")"
|
||||
export "$var"="$val"
|
||||
else
|
||||
echo "File ${!fileVar} doesn't exist"
|
||||
exit 1
|
||||
fi
|
||||
fi
|
||||
|
||||
}
|
||||
|
||||
# Source: https://github.com/sameersbn/docker-gitlab/
|
||||
map_uidgid() {
|
||||
USERMAP_ORIG_UID=$(id -u paperless)
|
||||
USERMAP_ORIG_GID=$(id -g paperless)
|
||||
USERMAP_NEW_UID=${USERMAP_UID:-$USERMAP_ORIG_UID}
|
||||
USERMAP_NEW_GID=${USERMAP_GID:-${USERMAP_ORIG_GID:-$USERMAP_NEW_UID}}
|
||||
if [[ ${USERMAP_NEW_UID} != "${USERMAP_ORIG_UID}" || ${USERMAP_NEW_GID} != "${USERMAP_ORIG_GID}" ]]; then
|
||||
echo "Mapping UID and GID for paperless:paperless to $USERMAP_NEW_UID:$USERMAP_NEW_GID"
|
||||
usermod -o -u "${USERMAP_NEW_UID}" paperless
|
||||
groupmod -o -g "${USERMAP_NEW_GID}" paperless
|
||||
fi
|
||||
}
|
||||
|
||||
map_folders() {
|
||||
# Export these so they can be used in docker-prepare.sh
|
||||
export DATA_DIR="${PAPERLESS_DATA_DIR:-/usr/src/paperless/data}"
|
||||
export MEDIA_ROOT_DIR="${PAPERLESS_MEDIA_ROOT:-/usr/src/paperless/media}"
|
||||
}
|
||||
|
||||
initialize() {
|
||||
|
||||
# Setup environment from secrets before anything else
|
||||
for env_var in \
|
||||
PAPERLESS_DBUSER \
|
||||
PAPERLESS_DBPASS \
|
||||
PAPERLESS_SECRET_KEY \
|
||||
PAPERLESS_AUTO_LOGIN_USERNAME \
|
||||
PAPERLESS_ADMIN_USER \
|
||||
PAPERLESS_ADMIN_MAIL \
|
||||
PAPERLESS_ADMIN_PASSWORD; do
|
||||
# Check for a version of this var with _FILE appended
|
||||
# and convert the contents to the env var value
|
||||
file_env ${env_var}
|
||||
done
|
||||
|
||||
# Change the user and group IDs if needed
|
||||
map_uidgid
|
||||
|
||||
# Check for overrides of certain folders
|
||||
map_folders
|
||||
|
||||
local export_dir="/usr/src/paperless/export"
|
||||
|
||||
for dir in "${export_dir}" "${DATA_DIR}" "${DATA_DIR}/index" "${MEDIA_ROOT_DIR}" "${MEDIA_ROOT_DIR}/documents" "${MEDIA_ROOT_DIR}/documents/originals" "${MEDIA_ROOT_DIR}/documents/thumbnails"; do
|
||||
if [[ ! -d "${dir}" ]]; then
|
||||
echo "Creating directory ${dir}"
|
||||
mkdir "${dir}"
|
||||
fi
|
||||
done
|
||||
|
||||
local tmp_dir="/tmp/paperless"
|
||||
echo "Creating directory ${tmp_dir}"
|
||||
mkdir -p "${tmp_dir}"
|
||||
|
||||
set +e
|
||||
echo "Adjusting permissions of paperless files. This may take a while."
|
||||
chown -R paperless:paperless ${tmp_dir}
|
||||
for dir in "${export_dir}" "${DATA_DIR}" "${MEDIA_ROOT_DIR}"; do
|
||||
find "${dir}" -not \( -user paperless -and -group paperless \) -exec chown paperless:paperless {} +
|
||||
done
|
||||
set -e
|
||||
|
||||
${gosu_cmd[@]} /sbin/docker-prepare.sh
|
||||
}
|
||||
|
||||
install_languages() {
|
||||
echo "Installing languages..."
|
||||
|
||||
local langs="$1"
|
||||
read -ra langs <<<"$langs"
|
||||
|
||||
# Check that it is not empty
|
||||
if [ ${#langs[@]} -eq 0 ]; then
|
||||
return
|
||||
fi
|
||||
apt-get update
|
||||
|
||||
for lang in "${langs[@]}"; do
|
||||
pkg="tesseract-ocr-$lang"
|
||||
# English is installed by default
|
||||
#if [[ "$lang" == "eng" ]]; then
|
||||
# continue
|
||||
#fi
|
||||
|
||||
if dpkg -s "$pkg" &>/dev/null; then
|
||||
echo "Package $pkg already installed!"
|
||||
continue
|
||||
fi
|
||||
|
||||
if ! apt-cache show "$pkg" &>/dev/null; then
|
||||
echo "Package $pkg not found! :("
|
||||
continue
|
||||
fi
|
||||
|
||||
echo "Installing package $pkg..."
|
||||
if ! apt-get -y install "$pkg" &>/dev/null; then
|
||||
echo "Could not install $pkg"
|
||||
exit 1
|
||||
fi
|
||||
done
|
||||
}
|
||||
|
||||
echo "Paperless-ngx docker container starting..."
|
||||
|
||||
gosu_cmd=(gosu paperless)
|
||||
if [ $(id -u) == $(id -u paperless) ]; then
|
||||
gosu_cmd=()
|
||||
fi
|
||||
|
||||
# Install additional languages if specified
|
||||
if [[ -n "$PAPERLESS_OCR_LANGUAGES" ]]; then
|
||||
install_languages "$PAPERLESS_OCR_LANGUAGES"
|
||||
fi
|
||||
|
||||
initialize
|
||||
|
||||
if [[ "$1" != "/"* ]]; then
|
||||
echo Executing management command "$@"
|
||||
exec ${gosu_cmd[@]} python3 manage.py "$@"
|
||||
else
|
||||
echo Executing "$@"
|
||||
exec "$@"
|
||||
fi
|
@@ -1,83 +0,0 @@
|
||||
#!/usr/bin/env bash
|
||||
|
||||
set -e
|
||||
|
||||
wait_for_postgres() {
|
||||
local attempt_num=1
|
||||
local max_attempts=5
|
||||
|
||||
echo "Waiting for PostgreSQL to start..."
|
||||
|
||||
local host="${PAPERLESS_DBHOST:-localhost}"
|
||||
local port="${PAPERLESS_DBPORT:-5432}"
|
||||
|
||||
# Disable warning, host and port can't have spaces
|
||||
# shellcheck disable=SC2086
|
||||
while [ ! "$(pg_isready -h ${host} -p ${port})" ]; do
|
||||
|
||||
if [ $attempt_num -eq $max_attempts ]; then
|
||||
echo "Unable to connect to database."
|
||||
exit 1
|
||||
else
|
||||
echo "Attempt $attempt_num failed! Trying again in 5 seconds..."
|
||||
|
||||
fi
|
||||
|
||||
attempt_num=$(("$attempt_num" + 1))
|
||||
sleep 5
|
||||
done
|
||||
}
|
||||
|
||||
wait_for_redis() {
|
||||
# We use a Python script to send the Redis ping
|
||||
# instead of installing redis-tools just for 1 thing
|
||||
if ! python3 /sbin/wait-for-redis.py; then
|
||||
exit 1
|
||||
fi
|
||||
}
|
||||
|
||||
migrations() {
|
||||
(
|
||||
# flock is in place to prevent multiple containers from doing migrations
|
||||
# simultaneously. This also ensures that the db is ready when the command
|
||||
# of the current container starts.
|
||||
flock 200
|
||||
echo "Apply database migrations..."
|
||||
python3 manage.py migrate
|
||||
) 200>"${DATA_DIR}/migration_lock"
|
||||
}
|
||||
|
||||
search_index() {
|
||||
|
||||
local index_version=1
|
||||
local index_version_file=${DATA_DIR}/.index_version
|
||||
|
||||
if [[ (! -f "${index_version_file}") || $(<"${index_version_file}") != "$index_version" ]]; then
|
||||
echo "Search index out of date. Updating..."
|
||||
python3 manage.py document_index reindex --no-progress-bar
|
||||
echo ${index_version} | tee "${index_version_file}" >/dev/null
|
||||
fi
|
||||
}
|
||||
|
||||
superuser() {
|
||||
if [[ -n "${PAPERLESS_ADMIN_USER}" ]]; then
|
||||
python3 manage.py manage_superuser
|
||||
fi
|
||||
}
|
||||
|
||||
do_work() {
|
||||
if [[ -n "${PAPERLESS_DBHOST}" ]]; then
|
||||
wait_for_postgres
|
||||
fi
|
||||
|
||||
wait_for_redis
|
||||
|
||||
migrations
|
||||
|
||||
search_index
|
||||
|
||||
superuser
|
||||
|
||||
}
|
||||
|
||||
do_work
|
@@ -1,96 +0,0 @@
|
||||
<?xml version="1.0" encoding="UTF-8"?>
|
||||
<!DOCTYPE policymap [
|
||||
<!ELEMENT policymap (policy)+>
|
||||
<!ATTLIST policymap xmlns CDATA #FIXED ''>
|
||||
<!ELEMENT policy EMPTY>
|
||||
<!ATTLIST policy xmlns CDATA #FIXED '' domain NMTOKEN #REQUIRED
|
||||
name NMTOKEN #IMPLIED pattern CDATA #IMPLIED rights NMTOKEN #IMPLIED
|
||||
stealth NMTOKEN #IMPLIED value CDATA #IMPLIED>
|
||||
]>
|
||||
<!--
|
||||
Configure ImageMagick policies.
|
||||
|
||||
Domains include system, delegate, coder, filter, path, or resource.
|
||||
|
||||
Rights include none, read, write, execute and all. Use | to combine them,
|
||||
for example: "read | write" to permit read from, or write to, a path.
|
||||
|
||||
Use a glob expression as a pattern.
|
||||
|
||||
Suppose we do not want users to process MPEG video images:
|
||||
|
||||
<policy domain="delegate" rights="none" pattern="mpeg:decode" />
|
||||
|
||||
Here we do not want users reading images from HTTP:
|
||||
|
||||
<policy domain="coder" rights="none" pattern="HTTP" />
|
||||
|
||||
The /repository file system is restricted to read only. We use a glob
|
||||
expression to match all paths that start with /repository:
|
||||
|
||||
<policy domain="path" rights="read" pattern="/repository/*" />
|
||||
|
||||
Lets prevent users from executing any image filters:
|
||||
|
||||
<policy domain="filter" rights="none" pattern="*" />
|
||||
|
||||
Any large image is cached to disk rather than memory:
|
||||
|
||||
<policy domain="resource" name="area" value="1GP"/>
|
||||
|
||||
Define arguments for the memory, map, area, width, height and disk resources
|
||||
with SI prefixes (e.g. 100MB). In addition, resource policies are maximums
|
||||
for each instance of ImageMagick (e.g. policy memory limit 1GB, -limit 2GB
|
||||
exceeds policy maximum so memory limit is 1GB).
|
||||
|
||||
Rules are processed in order. Here we want to restrict ImageMagick to only
|
||||
read or write a small subset of proven web-safe image types:
|
||||
|
||||
<policy domain="delegate" rights="none" pattern="*" />
|
||||
<policy domain="filter" rights="none" pattern="*" />
|
||||
<policy domain="coder" rights="none" pattern="*" />
|
||||
<policy domain="coder" rights="read|write" pattern="{GIF,JPEG,PNG,WEBP}" />
|
||||
-->
|
||||
<policymap>
|
||||
<!-- <policy domain="system" name="shred" value="2"/> -->
|
||||
<!-- <policy domain="system" name="precision" value="6"/> -->
|
||||
<!-- <policy domain="system" name="memory-map" value="anonymous"/> -->
|
||||
<!-- <policy domain="system" name="max-memory-request" value="256MiB"/> -->
|
||||
<!-- <policy domain="resource" name="temporary-path" value="/tmp"/> -->
|
||||
<policy domain="resource" name="memory" value="256MiB"/>
|
||||
<policy domain="resource" name="map" value="512MiB"/>
|
||||
<policy domain="resource" name="width" value="16KP"/>
|
||||
<policy domain="resource" name="height" value="16KP"/>
|
||||
<!-- <policy domain="resource" name="list-length" value="128"/> -->
|
||||
<policy domain="resource" name="area" value="128MB"/>
|
||||
<policy domain="resource" name="disk" value="1GiB"/>
|
||||
<!-- <policy domain="resource" name="file" value="768"/> -->
|
||||
<!-- <policy domain="resource" name="thread" value="4"/> -->
|
||||
<!-- <policy domain="resource" name="throttle" value="0"/> -->
|
||||
<!-- <policy domain="resource" name="time" value="3600"/> -->
|
||||
<!-- <policy domain="coder" rights="none" pattern="MVG" /> -->
|
||||
<!-- <policy domain="module" rights="none" pattern="{PS,PDF,XPS}" /> -->
|
||||
<!-- <policy domain="delegate" rights="none" pattern="HTTPS" /> -->
|
||||
<!-- <policy domain="path" rights="none" pattern="@*" /> -->
|
||||
<!-- <policy domain="cache" name="memory-map" value="anonymous"/> -->
|
||||
<!-- <policy domain="cache" name="synchronize" value="True"/> -->
|
||||
<!-- <policy domain="cache" name="shared-secret" value="passphrase" stealth="true"/> -->
|
||||
<!-- <policy domain="system" name="pixel-cache-memory" value="anonymous"/> -->
|
||||
<!-- <policy domain="system" name="shred" value="2"/> -->
|
||||
<!-- <policy domain="system" name="precision" value="6"/> -->
|
||||
<!-- not needed because MVG must be requested explicitly: -->
|
||||
<!-- <policy domain="delegate" rights="none" pattern="MVG" /> -->
|
||||
<!-- use curl -->
|
||||
<policy domain="delegate" rights="none" pattern="URL" />
|
||||
<policy domain="delegate" rights="none" pattern="HTTPS" />
|
||||
<policy domain="delegate" rights="none" pattern="HTTP" />
|
||||
<!-- in order to avoid to get image with password text -->
|
||||
<policy domain="path" rights="none" pattern="@*"/>
|
||||
<!-- disable ghostscript format types -->
|
||||
<policy domain="coder" rights="none" pattern="PS" />
|
||||
<policy domain="coder" rights="none" pattern="PS2" />
|
||||
<policy domain="coder" rights="none" pattern="PS3" />
|
||||
<policy domain="coder" rights="none" pattern="EPS" />
|
||||
<policy domain="coder" rights="read|write" pattern="PDF" />
|
||||
<policy domain="coder" rights="none" pattern="XPS" />
|
||||
</policymap>
|
@@ -1,21 +0,0 @@
|
||||
#!/usr/bin/env bash
|
||||
|
||||
set -eu
|
||||
|
||||
for command in decrypt_documents \
|
||||
document_archiver \
|
||||
document_exporter \
|
||||
document_importer \
|
||||
mail_fetcher \
|
||||
document_create_classifier \
|
||||
document_index \
|
||||
document_renamer \
|
||||
document_retagger \
|
||||
document_thumbnails \
|
||||
document_sanity_checker \
|
||||
manage_superuser;
|
||||
do
|
||||
echo "installing $command..."
|
||||
sed "s/management_command/$command/g" management_script.sh > /usr/local/bin/$command
|
||||
chmod +x /usr/local/bin/$command
|
||||
done
|
@@ -1,15 +0,0 @@
|
||||
#!/usr/bin/env bash
|
||||
|
||||
set -e
|
||||
|
||||
cd /usr/src/paperless/src/
|
||||
|
||||
if [[ $(id -u) == 0 ]] ;
|
||||
then
|
||||
gosu paperless python3 manage.py management_command "$@"
|
||||
elif [[ $(id -un) == "paperless" ]] ;
|
||||
then
|
||||
python3 manage.py management_command "$@"
|
||||
else
|
||||
echo "Unknown user."
|
||||
fi
|
@@ -1,15 +0,0 @@
|
||||
#!/usr/bin/env bash
|
||||
|
||||
rootless_args=()
|
||||
if [ $(id -u) == $(id -u paperless) ]; then
|
||||
rootless_args=(
|
||||
--user
|
||||
paperless
|
||||
--logfile
|
||||
supervisord.log
|
||||
--pidfile
|
||||
supervisord.pid
|
||||
)
|
||||
fi
|
||||
|
||||
/usr/local/bin/supervisord -c /etc/supervisord.conf ${rootless_args[@]}
|
@@ -1,36 +0,0 @@
|
||||
[supervisord]
|
||||
nodaemon=true ; start in foreground if true; default false
|
||||
logfile=/var/log/supervisord/supervisord.log ; main log file; default $CWD/supervisord.log
|
||||
pidfile=/var/run/supervisord/supervisord.pid ; supervisord pidfile; default supervisord.pid
|
||||
logfile_maxbytes=50MB ; max main logfile bytes b4 rotation; default 50MB
|
||||
logfile_backups=10 ; # of main logfile backups; 0 means none, default 10
|
||||
loglevel=info ; log level; default info; others: debug,warn,trace
|
||||
user=root
|
||||
|
||||
[program:gunicorn]
|
||||
command=gunicorn -c /usr/src/paperless/gunicorn.conf.py paperless.asgi:application
|
||||
user=paperless
|
||||
|
||||
stdout_logfile=/dev/stdout
|
||||
stdout_logfile_maxbytes=0
|
||||
stderr_logfile=/dev/stderr
|
||||
stderr_logfile_maxbytes=0
|
||||
|
||||
[program:consumer]
|
||||
command=python3 manage.py document_consumer
|
||||
user=paperless
|
||||
|
||||
stdout_logfile=/dev/stdout
|
||||
stdout_logfile_maxbytes=0
|
||||
stderr_logfile=/dev/stderr
|
||||
stderr_logfile_maxbytes=0
|
||||
|
||||
[program:scheduler]
|
||||
command=python3 manage.py qcluster
|
||||
user=paperless
|
||||
stopasgroup = true
|
||||
|
||||
stdout_logfile=/dev/stdout
|
||||
stdout_logfile_maxbytes=0
|
||||
stderr_logfile=/dev/stderr
|
||||
stderr_logfile_maxbytes=0
|
@@ -1,44 +0,0 @@
|
||||
#!/usr/bin/env python3
|
||||
"""
|
||||
Simple script which attempts to ping the Redis broker as set in the environment for
|
||||
a certain number of times, waiting a little bit in between
|
||||
|
||||
"""
|
||||
import os
|
||||
import sys
|
||||
import time
|
||||
from typing import Final
|
||||
|
||||
from redis import Redis
|
||||
|
||||
if __name__ == "__main__":
|
||||
|
||||
MAX_RETRY_COUNT: Final[int] = 5
|
||||
RETRY_SLEEP_SECONDS: Final[int] = 5
|
||||
|
||||
REDIS_URL: Final[str] = os.getenv("PAPERLESS_REDIS", "redis://localhost:6379")
|
||||
|
||||
print(f"Waiting for Redis: {REDIS_URL}", flush=True)
|
||||
|
||||
attempt = 0
|
||||
with Redis.from_url(url=REDIS_URL) as client:
|
||||
while attempt < MAX_RETRY_COUNT:
|
||||
try:
|
||||
client.ping()
|
||||
break
|
||||
except Exception as e:
|
||||
print(
|
||||
f"Redis ping #{attempt} failed.\n"
|
||||
f"Error: {str(e)}.\n"
|
||||
f"Waiting {RETRY_SLEEP_SECONDS}s",
|
||||
flush=True,
|
||||
)
|
||||
time.sleep(RETRY_SLEEP_SECONDS)
|
||||
attempt += 1
|
||||
|
||||
if attempt >= MAX_RETRY_COUNT:
|
||||
print(f"Failed to connect to: {REDIS_URL}")
|
||||
sys.exit(os.EX_UNAVAILABLE)
|
||||
else:
|
||||
print(f"Connected to Redis broker: {REDIS_URL}")
|
||||
sys.exit(os.EX_OK)
|
@@ -1,10 +1,11 @@
|
||||
FROM python:3.5.1
|
||||
MAINTAINER Pit Kleyersburg <pitkley@googlemail.com>
|
||||
|
||||
# Install Sphinx and Pygments
|
||||
RUN pip install --no-cache-dir Sphinx Pygments \
|
||||
# Setup directories, copy data
|
||||
&& mkdir /build
|
||||
RUN pip install Sphinx Pygments
|
||||
|
||||
# Setup directories, copy data
|
||||
RUN mkdir /build
|
||||
COPY . /build
|
||||
WORKDIR /build/docs
|
||||
|
||||
|
@@ -24,7 +24,6 @@ I18NSPHINXOPTS = $(PAPEROPT_$(PAPER)) $(SPHINXOPTS) .
|
||||
help:
|
||||
@echo "Please use \`make <target>' where <target> is one of"
|
||||
@echo " html to make standalone HTML files"
|
||||
@echo " livehtml to preview changes with live reload in your browser"
|
||||
@echo " dirhtml to make HTML files named index.html in directories"
|
||||
@echo " singlehtml to make a single large HTML file"
|
||||
@echo " pickle to make pickle files"
|
||||
@@ -55,9 +54,6 @@ html:
|
||||
@echo
|
||||
@echo "Build finished. The HTML pages are in $(BUILDDIR)/html."
|
||||
|
||||
livehtml:
|
||||
sphinx-autobuild "./" "$(BUILDDIR)" $(O)
|
||||
|
||||
dirhtml:
|
||||
$(SPHINXBUILD) -b dirhtml $(ALLSPHINXOPTS) $(BUILDDIR)/dirhtml
|
||||
@echo
|
||||
|
596
docs/_static/css/custom.css
vendored
@@ -1,596 +0,0 @@
|
||||
/* Variables */
|
||||
:root {
|
||||
--color-text-body: #5c5962;
|
||||
--color-text-body-light: #fcfcfc;
|
||||
--color-text-anchor: #7253ed;
|
||||
--color-text-alt: rgba(0, 0, 0, 0.3);
|
||||
--color-text-title: #27262b;
|
||||
--color-text-code-inline: #e74c3c;
|
||||
--color-text-code-nt: #062873;
|
||||
--color-text-selection: #b19eff;
|
||||
--color-bg-body: #fcfcfc;
|
||||
--color-bg-body-alt: #f3f6f6;
|
||||
--color-bg-side-nav: #f5f6fa;
|
||||
--color-bg-side-nav-hover: #ebedf5;
|
||||
--color-bg-code-block: var(--color-bg-side-nav);
|
||||
--color-border: #eeebee;
|
||||
--color-btn-neutral-bg: #f3f6f6;
|
||||
--color-btn-neutral-bg-hover: #e5ebeb;
|
||||
--color-success-title: #1abc9c;
|
||||
--color-success-body: #dbfaf4;
|
||||
--color-warning-title: #f0b37e;
|
||||
--color-warning-body: #ffedcc;
|
||||
--color-danger-title: #f29f97;
|
||||
--color-danger-body: #fdf3f2;
|
||||
--color-info-title: #6ab0de;
|
||||
--color-info-body: #e7f2fa;
|
||||
}
|
||||
|
||||
.dark-mode {
|
||||
--color-text-body: #abb2bf;
|
||||
--color-text-body-light: #9499a2;
|
||||
--color-text-alt: rgba(255, 255, 255, 0.5);
|
||||
--color-text-title: var(--color-text-anchor);
|
||||
--color-text-code-inline: #abb2bf;
|
||||
--color-text-code-nt: #2063f3;
|
||||
--color-text-selection: #030303;
|
||||
--color-bg-body: #1d1d20 !important;
|
||||
--color-bg-body-alt: #131315;
|
||||
--color-bg-side-nav: #18181a;
|
||||
--color-bg-side-nav-hover: #101216;
|
||||
--color-bg-code-block: #101216;
|
||||
--color-border: #47494f;
|
||||
--color-btn-neutral-bg: #242529;
|
||||
--color-btn-neutral-bg-hover: #101216;
|
||||
--color-success-title: #02120f;
|
||||
--color-success-body: #041b17;
|
||||
--color-warning-title: #1b0e03;
|
||||
--color-warning-body: #371d06;
|
||||
--color-danger-title: #120902;
|
||||
--color-danger-body: #1b0503;
|
||||
--color-info-title: #020608;
|
||||
--color-info-body: #06141e;
|
||||
}
|
||||
|
||||
* {
|
||||
transition: background-color 0.3s ease, border-color 0.3s ease;
|
||||
}
|
||||
|
||||
/* Typography */
|
||||
body {
|
||||
font-family: system-ui,-apple-system,BlinkMacSystemFont,"Segoe UI",Roboto,"Helvetica Neue",Arial,sans-serif;
|
||||
font-size: inherit;
|
||||
line-height: 1.4;
|
||||
color: var(--color-text-body);
|
||||
}
|
||||
|
||||
.rst-content p {
|
||||
word-break: break-word;
|
||||
}
|
||||
|
||||
h1, h2, h3, h4, h5, h6 {
|
||||
font-family: inherit;
|
||||
}
|
||||
|
||||
.rst-content .toctree-wrapper>p.caption, .rst-content h1, .rst-content h2, .rst-content h3, .rst-content h4, .rst-content h5, .rst-content h6 {
|
||||
padding-top: .5em;
|
||||
}
|
||||
|
||||
p, .main-content-wrap, .rst-content .section ul, .rst-content .toctree-wrapper ul, .rst-content section ul, .wy-plain-list-disc, article ul {
|
||||
line-height: 1.6;
|
||||
}
|
||||
|
||||
pre, .code, .rst-content .linenodiv pre, .rst-content div[class^=highlight] pre, .rst-content pre.literal-block {
|
||||
font-family: "SFMono-Regular", Menlo,Consolas, Monospace;
|
||||
font-size: 0.75em;
|
||||
line-height: 1.8;
|
||||
}
|
||||
|
||||
.wy-menu-vertical li.toctree-l3,.wy-menu-vertical li.toctree-l4 {
|
||||
font-size: 1rem
|
||||
}
|
||||
|
||||
.rst-versions {
|
||||
font-family: inherit;
|
||||
line-height: 1;
|
||||
}
|
||||
|
||||
footer, footer p {
|
||||
font-size: .8rem;
|
||||
}
|
||||
|
||||
footer .rst-footer-buttons {
|
||||
font-size: 1rem;
|
||||
}
|
||||
|
||||
@media (max-width: 400px) {
|
||||
/* break code lines on mobile */
|
||||
pre, code {
|
||||
word-break: break-word;
|
||||
}
|
||||
}
|
||||
|
||||
|
||||
/* Layout */
|
||||
.wy-side-nav-search, .wy-menu-vertical {
|
||||
width: auto;
|
||||
}
|
||||
|
||||
.wy-nav-side {
|
||||
z-index: 0;
|
||||
display: flex;
|
||||
flex-wrap: wrap;
|
||||
background-color: var(--color-bg-side-nav);
|
||||
}
|
||||
|
||||
.wy-side-scroll {
|
||||
width: 100%;
|
||||
overflow-y: auto;
|
||||
}
|
||||
|
||||
@media (min-width: 66.5rem) {
|
||||
.wy-side-scroll {
|
||||
width:264px
|
||||
}
|
||||
}
|
||||
|
||||
@media (min-width: 50rem) {
|
||||
.wy-nav-side {
|
||||
flex-wrap: nowrap;
|
||||
position: fixed;
|
||||
width: 248px;
|
||||
height: 100%;
|
||||
flex-direction: column;
|
||||
border-right: 1px solid var(--color-border);
|
||||
align-items:flex-end
|
||||
}
|
||||
}
|
||||
|
||||
@media (min-width: 66.5rem) {
|
||||
.wy-nav-side {
|
||||
width: calc((100% - 1064px) / 2 + 264px);
|
||||
min-width:264px
|
||||
}
|
||||
}
|
||||
|
||||
@media (min-width: 50rem) {
|
||||
.wy-nav-content-wrap {
|
||||
position: relative;
|
||||
max-width: 800px;
|
||||
margin-left:248px
|
||||
}
|
||||
}
|
||||
|
||||
@media (min-width: 66.5rem) {
|
||||
.wy-nav-content-wrap {
|
||||
margin-left:calc((100% - 1064px) / 2 + 264px)
|
||||
}
|
||||
}
|
||||
|
||||
|
||||
/* Colors */
|
||||
body.wy-body-for-nav,
|
||||
.wy-nav-content {
|
||||
background: var(--color-bg-body);
|
||||
}
|
||||
|
||||
.wy-nav-side {
|
||||
border-right: 1px solid var(--color-border);
|
||||
}
|
||||
|
||||
.wy-side-nav-search, .wy-nav-top {
|
||||
background: var(--color-bg-side-nav);
|
||||
border-bottom: 1px solid var(--color-border);
|
||||
}
|
||||
|
||||
.wy-nav-content-wrap {
|
||||
background: inherit;
|
||||
}
|
||||
|
||||
.wy-side-nav-search > a, .wy-nav-top a, .wy-nav-top i {
|
||||
color: var(--color-text-title);
|
||||
}
|
||||
|
||||
.wy-side-nav-search > a:hover, .wy-nav-top a:hover {
|
||||
background: transparent;
|
||||
}
|
||||
|
||||
.wy-side-nav-search > div.version {
|
||||
color: var(--color-text-alt)
|
||||
}
|
||||
|
||||
.wy-side-nav-search > div[role="search"] {
|
||||
border-top: 1px solid var(--color-border);
|
||||
}
|
||||
|
||||
.wy-menu-vertical li.toctree-l2.current>a, .wy-menu-vertical li.toctree-l2.current li.toctree-l3>a,
|
||||
.wy-menu-vertical li.toctree-l3.current>a, .wy-menu-vertical li.toctree-l3.current li.toctree-l4>a {
|
||||
background: var(--color-bg-side-nav);
|
||||
}
|
||||
|
||||
.rst-content .highlighted {
|
||||
background: #eedd85;
|
||||
box-shadow: 0 0 0 2px #eedd85;
|
||||
font-weight: 600;
|
||||
}
|
||||
|
||||
.wy-side-nav-search input[type=text],
|
||||
html.writer-html5 .rst-content table.docutils th {
|
||||
color: var(--color-text-body);
|
||||
}
|
||||
|
||||
.rst-content table.docutils:not(.field-list) tr:nth-child(2n-1) td,
|
||||
.wy-table-backed,
|
||||
.wy-table-odd td,
|
||||
.wy-table-striped tr:nth-child(2n-1) td {
|
||||
background-color: var(--color-bg-body-alt);
|
||||
}
|
||||
|
||||
.rst-content table.docutils,
|
||||
.wy-table-bordered-all,
|
||||
html.writer-html5 .rst-content table.docutils th,
|
||||
.rst-content table.docutils td,
|
||||
.wy-table-bordered-all td,
|
||||
hr {
|
||||
border-color: var(--color-border) !important;
|
||||
}
|
||||
|
||||
::selection {
|
||||
background: var(--color-text-selection);
|
||||
}
|
||||
|
||||
/* Ridiculous rules are taken from sphinx_rtd */
|
||||
.rst-content .admonition-title,
|
||||
.wy-alert-title {
|
||||
color: var(--color-text-body-light);
|
||||
}
|
||||
|
||||
.rst-content .hint,
|
||||
.rst-content .important,
|
||||
.rst-content .tip,
|
||||
.rst-content .wy-alert-success,
|
||||
.wy-alert.wy-alert-success {
|
||||
background: var(--color-success-body);
|
||||
}
|
||||
|
||||
.rst-content .hint .admonition-title,
|
||||
.rst-content .hint .wy-alert-title,
|
||||
.rst-content .important .admonition-title,
|
||||
.rst-content .important .wy-alert-title,
|
||||
.rst-content .tip .admonition-title,
|
||||
.rst-content .tip .wy-alert-title,
|
||||
.rst-content .wy-alert-success .admonition-title,
|
||||
.rst-content .wy-alert-success .wy-alert-title,
|
||||
.wy-alert.wy-alert-success .rst-content .admonition-title,
|
||||
.wy-alert.wy-alert-success .wy-alert-title {
|
||||
background-color: var(--color-success-title);
|
||||
}
|
||||
|
||||
.rst-content .admonition-todo,
|
||||
.rst-content .attention,
|
||||
.rst-content .caution,
|
||||
.rst-content .warning,
|
||||
.rst-content .wy-alert-warning,
|
||||
.wy-alert.wy-alert-warning {
|
||||
background: var(--color-warning-body);
|
||||
}
|
||||
|
||||
.rst-content .admonition-todo .admonition-title,
|
||||
.rst-content .admonition-todo .wy-alert-title,
|
||||
.rst-content .attention .admonition-title,
|
||||
.rst-content .attention .wy-alert-title,
|
||||
.rst-content .caution .admonition-title,
|
||||
.rst-content .caution .wy-alert-title,
|
||||
.rst-content .warning .admonition-title,
|
||||
.rst-content .warning .wy-alert-title,
|
||||
.rst-content .wy-alert-warning .admonition-title,
|
||||
.rst-content .wy-alert-warning .wy-alert-title,
|
||||
.rst-content .wy-alert.wy-alert-warning .admonition-title,
|
||||
.wy-alert.wy-alert-warning .rst-content .admonition-title,
|
||||
.wy-alert.wy-alert-warning .wy-alert-title {
|
||||
background: var(--color-warning-title);
|
||||
}
|
||||
|
||||
.rst-content .danger,
|
||||
.rst-content .error,
|
||||
.rst-content .wy-alert-danger,
|
||||
.wy-alert.wy-alert-danger {
|
||||
background: var(--color-danger-body);
|
||||
}
|
||||
|
||||
.rst-content .danger .admonition-title,
|
||||
.rst-content .danger .wy-alert-title,
|
||||
.rst-content .error .admonition-title,
|
||||
.rst-content .error .wy-alert-title,
|
||||
.rst-content .wy-alert-danger .admonition-title,
|
||||
.rst-content .wy-alert-danger .wy-alert-title,
|
||||
.wy-alert.wy-alert-danger .rst-content .admonition-title,
|
||||
.wy-alert.wy-alert-danger .wy-alert-title {
|
||||
background: var(--color-danger-title);
|
||||
}
|
||||
|
||||
.rst-content .note,
|
||||
.rst-content .seealso,
|
||||
.rst-content .wy-alert-info,
|
||||
.wy-alert.wy-alert-info {
|
||||
background: var(--color-info-body);
|
||||
}
|
||||
|
||||
.rst-content .note .admonition-title,
|
||||
.rst-content .note .wy-alert-title,
|
||||
.rst-content .seealso .admonition-title,
|
||||
.rst-content .seealso .wy-alert-title,
|
||||
.rst-content .wy-alert-info .admonition-title,
|
||||
.rst-content .wy-alert-info .wy-alert-title,
|
||||
.wy-alert.wy-alert-info .rst-content .admonition-title,
|
||||
.wy-alert.wy-alert-info .wy-alert-title {
|
||||
background: var(--color-info-title);
|
||||
}
|
||||
|
||||
|
||||
|
||||
/* Links */
|
||||
a, a:visited,
|
||||
.wy-menu-vertical a,
|
||||
a.icon.icon-home,
|
||||
.wy-menu-vertical li.toctree-l1.current > a.current {
|
||||
color: var(--color-text-anchor);
|
||||
text-decoration: none;
|
||||
}
|
||||
|
||||
a:hover, .wy-breadcrumbs-aside a {
|
||||
color: var(--color-text-anchor); /* reset */
|
||||
}
|
||||
|
||||
.rst-versions a, .rst-versions .rst-current-version {
|
||||
color: var(--color-text-anchor);
|
||||
}
|
||||
|
||||
.wy-nav-content a.reference, .wy-nav-content a:not([class]) {
|
||||
background-image: linear-gradient(var(--color-border) 0%, var(--color-border) 100%);
|
||||
background-repeat: repeat-x;
|
||||
background-position: 0 100%;
|
||||
background-size: 1px 1px;
|
||||
}
|
||||
|
||||
.wy-nav-content a.reference:hover, .wy-nav-content a:not([class]):hover {
|
||||
background-image: linear-gradient(rgba(114,83,237,0.45) 0%, rgba(114,83,237,0.45) 100%);
|
||||
background-size: 1px 1px;
|
||||
}
|
||||
|
||||
.wy-menu-vertical a:hover,
|
||||
.wy-menu-vertical li.current a:hover,
|
||||
.wy-menu-vertical a:active {
|
||||
background: var(--color-bg-side-nav-hover) !important;
|
||||
color: var(--color-text-body);
|
||||
}
|
||||
|
||||
.wy-menu-vertical li.toctree-l1.current>a,
|
||||
.wy-menu-vertical li.current>a,
|
||||
.wy-menu-vertical li.on a {
|
||||
background-color: var(--color-bg-side-nav-hover);
|
||||
border: none;
|
||||
font-weight: normal;
|
||||
}
|
||||
|
||||
.wy-menu-vertical li.current {
|
||||
background-color: inherit;
|
||||
}
|
||||
|
||||
.wy-menu-vertical li.current a {
|
||||
border-right: none;
|
||||
}
|
||||
|
||||
.wy-menu-vertical li.toctree-l2 a,
|
||||
.wy-menu-vertical li.toctree-l3 a,
|
||||
.wy-menu-vertical li.toctree-l4 a,
|
||||
.wy-menu-vertical li.toctree-l5 a,
|
||||
.wy-menu-vertical li.toctree-l6 a,
|
||||
.wy-menu-vertical li.toctree-l7 a,
|
||||
.wy-menu-vertical li.toctree-l8 a,
|
||||
.wy-menu-vertical li.toctree-l9 a,
|
||||
.wy-menu-vertical li.toctree-l10 a {
|
||||
color: var(--color-text-body);
|
||||
}
|
||||
|
||||
a.image-reference, a.image-reference:hover {
|
||||
background: none !important;
|
||||
}
|
||||
|
||||
a.image-reference img {
|
||||
cursor: zoom-in;
|
||||
}
|
||||
|
||||
|
||||
/* Code blocks */
|
||||
.rst-content code, .rst-content tt, code {
|
||||
padding: 0.25em;
|
||||
font-weight: 400;
|
||||
background-color: var(--color-bg-code-block);
|
||||
border: 1px solid var(--color-border);
|
||||
border-radius: 4px;
|
||||
}
|
||||
|
||||
.rst-content div[class^=highlight], .rst-content pre.literal-block {
|
||||
padding: 0.7rem;
|
||||
margin-top: 0;
|
||||
margin-bottom: 0.75rem;
|
||||
overflow-x: auto;
|
||||
background-color: var(--color-bg-side-nav);
|
||||
border-color: var(--color-border);
|
||||
border-radius: 4px;
|
||||
box-shadow: none;
|
||||
}
|
||||
|
||||
.rst-content .admonition-title,
|
||||
.rst-content div.admonition,
|
||||
.wy-alert-title {
|
||||
padding: 10px 12px;
|
||||
border-top-left-radius: 4px;
|
||||
border-top-right-radius: 4px;
|
||||
}
|
||||
|
||||
.highlight .go {
|
||||
color: inherit;
|
||||
}
|
||||
|
||||
.highlight .nt {
|
||||
color: var(--color-text-code-nt);
|
||||
}
|
||||
|
||||
.rst-content code.literal,
|
||||
.rst-content tt.literal {
|
||||
border-color: var(--color-border);
|
||||
background-color: var(--color-border);
|
||||
color: var(--color-text-code-inline)
|
||||
}
|
||||
|
||||
|
||||
/* Search */
|
||||
.wy-side-nav-search input[type=text] {
|
||||
border: none;
|
||||
border-radius: 0;
|
||||
background-color: transparent;
|
||||
font-family: inherit;
|
||||
font-size: .85rem;
|
||||
box-shadow: none;
|
||||
padding: .7rem 1rem .7rem 2.8rem;
|
||||
margin: 0;
|
||||
}
|
||||
|
||||
#rtd-search-form {
|
||||
position: relative;
|
||||
}
|
||||
|
||||
#rtd-search-form:before {
|
||||
font: normal normal normal 14px/1 FontAwesome;
|
||||
font-size: inherit;
|
||||
text-rendering: auto;
|
||||
-webkit-font-smoothing: antialiased;
|
||||
-moz-osx-font-smoothing: grayscale;
|
||||
content: "\f002";
|
||||
color: var(--color-text-alt);
|
||||
position: absolute;
|
||||
left: 1.5rem;
|
||||
top: .7rem;
|
||||
}
|
||||
|
||||
/* Side nav */
|
||||
.wy-side-nav-search {
|
||||
padding: 1rem 0 0 0;
|
||||
}
|
||||
|
||||
.wy-menu-vertical li a button.toctree-expand {
|
||||
float: right;
|
||||
margin-right: -1.5em;
|
||||
padding: 0 .5em;
|
||||
}
|
||||
|
||||
.wy-menu-vertical a,
|
||||
.wy-menu-vertical li.current>a,
|
||||
.wy-menu-vertical li.current li>a {
|
||||
padding-right: 1.5em !important;
|
||||
}
|
||||
|
||||
.wy-menu-vertical li.current li>a.current {
|
||||
font-weight: 600;
|
||||
}
|
||||
|
||||
/* Misc spacing */
|
||||
.rst-content .admonition-title, .wy-alert-title {
|
||||
padding: 10px 12px;
|
||||
}
|
||||
|
||||
/* Buttons */
|
||||
.btn {
|
||||
display: inline-block;
|
||||
box-sizing: border-box;
|
||||
padding: 0.3em 1em;
|
||||
margin: 0;
|
||||
font-family: inherit;
|
||||
font-size: inherit;
|
||||
font-weight: 500;
|
||||
line-height: 1.5;
|
||||
color: var(--color-text-anchor);
|
||||
text-decoration: none;
|
||||
vertical-align: baseline;
|
||||
background-color: #f7f7f7;
|
||||
border-width: 0;
|
||||
border-radius: 4px;
|
||||
box-shadow: 0 1px 2px rgba(0,0,0,0.12),0 3px 10px rgba(0,0,0,0.08);
|
||||
appearance: none;
|
||||
}
|
||||
|
||||
.btn:active {
|
||||
padding: 0.3em 1em;
|
||||
}
|
||||
|
||||
.rst-content .btn:focus {
|
||||
outline: 1px solid #ccc;
|
||||
}
|
||||
|
||||
.rst-content .btn-neutral, .rst-content .btn span.fa {
|
||||
color: var(--color-text-body) !important;
|
||||
}
|
||||
|
||||
.btn-neutral {
|
||||
background-color: var(--color-btn-neutral-bg) !important;
|
||||
color: var(--color-btn-neutral-text) !important;
|
||||
border: 1px solid var(--color-btn-neutral-bg);
|
||||
}
|
||||
|
||||
.btn:hover, .btn-neutral:hover {
|
||||
background-color: var(--color-btn-neutral-bg-hover) !important;
|
||||
}
|
||||
|
||||
|
||||
/* Icon overrides */
|
||||
.wy-side-nav-search a.icon-home:before {
|
||||
display: none;
|
||||
}
|
||||
|
||||
.fa-minus-square-o:before,.wy-menu-vertical li.current>a button.toctree-expand:before,.wy-menu-vertical li.on a button.toctree-expand:before {
|
||||
content: "\f106"; /* fa-angle-up */
|
||||
}
|
||||
|
||||
.fa-plus-square-o:before, .wy-menu-vertical li button.toctree-expand:before {
|
||||
content: "\f107"; /* fa-angle-down */
|
||||
}
|
||||
|
||||
|
||||
/* Misc */
|
||||
.wy-nav-top {
|
||||
line-height: 36px;
|
||||
}
|
||||
|
||||
.wy-nav-top > i {
|
||||
font-size: 24px;
|
||||
padding: 8px 0 0 2px;
|
||||
color: var(--color-text-anchor);
|
||||
}
|
||||
|
||||
.rst-content table.docutils td,
|
||||
.rst-content table.docutils th,
|
||||
.rst-content table.field-list td,
|
||||
.rst-content table.field-list th,
|
||||
.wy-table td,
|
||||
.wy-table th {
|
||||
padding: 8px 14px;
|
||||
}
|
||||
|
||||
.dark-mode-toggle {
|
||||
position: absolute;
|
||||
top: 14px;
|
||||
right: 12px;
|
||||
height: 20px;
|
||||
width: 24px;
|
||||
z-index: 10;
|
||||
border: none;
|
||||
background-color: transparent;
|
||||
color: inherit;
|
||||
opacity: 0.7;
|
||||
}
|
||||
|
||||
.wy-nav-content-wrap {
|
||||
z-index: 20;
|
||||
}
|
14
docs/_static/custom.css
vendored
Normal file
@@ -0,0 +1,14 @@
|
||||
/* override table width restrictions */
|
||||
@media screen and (min-width: 767px) {
|
||||
|
||||
.wy-table-responsive table td {
|
||||
/* !important prevents the common CSS stylesheets from
|
||||
overriding this as on RTD they are loaded after this stylesheet */
|
||||
white-space: normal !important;
|
||||
}
|
||||
|
||||
.wy-table-responsive {
|
||||
overflow: visible !important;
|
||||
}
|
||||
|
||||
}
|
47
docs/_static/js/darkmode.js
vendored
@@ -1,47 +0,0 @@
|
||||
let toggleButton
|
||||
let icon
|
||||
|
||||
function load() {
|
||||
'use strict'
|
||||
|
||||
toggleButton = document.createElement('button')
|
||||
toggleButton.setAttribute('title', 'Toggle dark mode')
|
||||
toggleButton.classList.add('dark-mode-toggle')
|
||||
icon = document.createElement('i')
|
||||
icon.classList.add('fa', darkModeState ? 'fa-sun-o' : 'fa-moon-o')
|
||||
toggleButton.appendChild(icon)
|
||||
document.body.prepend(toggleButton)
|
||||
|
||||
// Listen for changes in the OS settings
|
||||
// addListener is used because older versions of Safari don't support addEventListener
|
||||
// prefersDarkQuery set in <head>
|
||||
if (prefersDarkQuery) {
|
||||
prefersDarkQuery.addListener(function (evt) {
|
||||
toggleDarkMode(evt.matches)
|
||||
})
|
||||
}
|
||||
|
||||
// Initial setting depending on the prefers-color-mode or localstorage
|
||||
// darkModeState should be set in the document <head> to prevent flash
|
||||
if (darkModeState == undefined) darkModeState = false
|
||||
toggleDarkMode(darkModeState)
|
||||
|
||||
// Toggles the "dark-mode" class on click and sets localStorage state
|
||||
toggleButton.addEventListener('click', () => {
|
||||
darkModeState = !darkModeState
|
||||
|
||||
toggleDarkMode(darkModeState)
|
||||
localStorage.setItem('dark-mode', darkModeState)
|
||||
})
|
||||
}
|
||||
|
||||
function toggleDarkMode(state) {
|
||||
document.documentElement.classList.toggle('dark-mode', state)
|
||||
document.documentElement.classList.toggle('light-mode', !state)
|
||||
icon.classList.remove('fa-sun-o')
|
||||
icon.classList.remove('fa-moon-o')
|
||||
icon.classList.add(state ? 'fa-sun-o' : 'fa-moon-o')
|
||||
darkModeState = state
|
||||
}
|
||||
|
||||
document.addEventListener('DOMContentLoaded', load)
|
BIN
docs/_static/recommended_workflow.png
vendored
Before Width: | Height: | Size: 67 KiB |
BIN
docs/_static/screenshot.png
vendored
Normal file
After Width: | Height: | Size: 445 KiB |
BIN
docs/_static/screenshots/bulk-edit.png
vendored
Before Width: | Height: | Size: 661 KiB |
BIN
docs/_static/screenshots/correspondents.png
vendored
Before Width: | Height: | Size: 457 KiB |
BIN
docs/_static/screenshots/dashboard.png
vendored
Before Width: | Height: | Size: 436 KiB |
BIN
docs/_static/screenshots/documents-filter.png
vendored
Before Width: | Height: | Size: 462 KiB |
BIN
docs/_static/screenshots/documents-largecards.png
vendored
Before Width: | Height: | Size: 608 KiB |
Before Width: | Height: | Size: 698 KiB |
BIN
docs/_static/screenshots/documents-smallcards.png
vendored
Before Width: | Height: | Size: 706 KiB |
BIN
docs/_static/screenshots/documents-table.png
vendored
Before Width: | Height: | Size: 480 KiB |
BIN
docs/_static/screenshots/documents-wchrome-dark.png
vendored
Before Width: | Height: | Size: 680 KiB |
BIN
docs/_static/screenshots/documents-wchrome.png
vendored
Before Width: | Height: | Size: 686 KiB |
BIN
docs/_static/screenshots/editing.png
vendored
Before Width: | Height: | Size: 848 KiB |
BIN
docs/_static/screenshots/logs.png
vendored
Before Width: | Height: | Size: 703 KiB |
BIN
docs/_static/screenshots/mail-rules-edited.png
vendored
Before Width: | Height: | Size: 96 KiB |
BIN
docs/_static/screenshots/mobile.png
vendored
Before Width: | Height: | Size: 388 KiB |
BIN
docs/_static/screenshots/new-tag.png
vendored
Before Width: | Height: | Size: 26 KiB |
BIN
docs/_static/screenshots/search-preview.png
vendored
Before Width: | Height: | Size: 54 KiB |
BIN
docs/_static/screenshots/search-results.png
vendored
Before Width: | Height: | Size: 517 KiB |
13
docs/_templates/layout.html
vendored
@@ -1,13 +0,0 @@
|
||||
{% extends "!layout.html" %}
|
||||
{% block extrahead %}
|
||||
<script>
|
||||
// MediaQueryList object
|
||||
const prefersDarkQuery = window.matchMedia("(prefers-color-scheme: dark)");
|
||||
const lsDark = localStorage.getItem("dark-mode");
|
||||
let darkModeState = lsDark !== null ? lsDark == "true" : prefersDarkQuery.matches;
|
||||
|
||||
document.documentElement.classList.toggle("dark-mode", darkModeState);
|
||||
document.documentElement.classList.toggle("light-mode", !darkModeState);
|
||||
</script>
|
||||
{{ super() }}
|
||||
{% endblock %}
|
@@ -1,520 +0,0 @@
|
||||
|
||||
**************
|
||||
Administration
|
||||
**************
|
||||
|
||||
.. _administration-backup:
|
||||
|
||||
Making backups
|
||||
##############
|
||||
|
||||
Multiple options exist for making backups of your paperless instance,
|
||||
depending on how you installed paperless.
|
||||
|
||||
Before making backups, make sure that paperless is not running.
|
||||
|
||||
Options available to any installation of paperless:
|
||||
|
||||
* Use the :ref:`document exporter <utilities-exporter>`.
|
||||
The document exporter exports all your documents, thumbnails and
|
||||
metadata to a specific folder. You may import your documents into a
|
||||
fresh instance of paperless again or store your documents in another
|
||||
DMS with this export.
|
||||
* The document exporter is also able to update an already existing export.
|
||||
Therefore, incremental backups with ``rsync`` are entirely possible.
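For illustration only, a minimal sketch of such an incremental backup, assuming paperless runs under docker-compose in ``/path/to/paperless`` and ``/mnt/backup/paperless`` is a hypothetical backup target:

.. code:: shell-session

$ cd /path/to/paperless
$ # update the export folder in place (see the document exporter below)
$ docker-compose exec -T webserver document_exporter ../export
$ # mirror the updated export to the backup target
$ rsync -a --delete export/ /mnt/backup/paperless/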
|
||||
|
||||
.. caution::
|
||||
|
||||
You cannot import an export generated with one version of paperless into a
|
||||
different version of paperless. The export contains an exact image of the
|
||||
database, and migrations may change the database layout.
|
||||
|
||||
Options available to docker installations:
|
||||
|
||||
* Back up the docker volumes. These usually reside within
|
||||
``/var/lib/docker/volumes`` on the host and you need to be root in order
|
||||
to access them.
|
||||
|
||||
Paperless uses 3 volumes:
|
||||
|
||||
* ``paperless_media``: This is where your documents are stored.
|
||||
* ``paperless_data``: This is where auxiliary data is stored. This
|
||||
folder also contains the SQLite database, if you use it.
|
||||
* ``paperless_pgdata``: Exists only if you use PostgreSQL and contains
|
||||
the database.
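One possible (hedged) way to archive these volumes is with a disposable helper container; the volume names below assume the default compose project name ``paperless``, and ``/path/to/backup`` is a placeholder:

.. code:: shell-session

$ docker run --rm -v paperless_media:/data -v /path/to/backup:/backup alpine tar czf /backup/paperless_media.tar.gz -C /data .
$ # repeat for paperless_data and, if present, paperless_pgdata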
|
||||
|
||||
Options available to bare-metal and non-docker installations:
|
||||
|
||||
* Back up the entire paperless folder. This ensures that if your paperless instance
|
||||
crashes at some point or your disk fails, you can simply copy the folder back
|
||||
into place and it works.
|
||||
|
||||
When using PostgreSQL, you'll also have to back up the database.
|
||||
|
||||
.. _migrating-restoring:
|
||||
|
||||
Restoring
|
||||
=========
|
||||
|
||||
.. _administration-updating:
|
||||
|
||||
Updating Paperless
|
||||
##################
|
||||
|
||||
Docker Route
|
||||
============
|
||||
|
||||
If a new release of paperless-ngx is available, upgrading depends on how you
|
||||
installed paperless-ngx in the first place. The releases are available at the
|
||||
`release page <https://github.com/paperless-ngx/paperless-ngx/releases>`_.
|
||||
|
||||
First of all, ensure that paperless is stopped.
|
||||
|
||||
.. code:: shell-session
|
||||
|
||||
$ cd /path/to/paperless
|
||||
$ docker-compose down
|
||||
|
||||
After that, :ref:`make a backup <administration-backup>`.
|
||||
|
||||
A. If you pull the image from the docker hub, all you need to do is:
|
||||
|
||||
.. code:: shell-session
|
||||
|
||||
$ docker-compose pull
|
||||
$ docker-compose up
|
||||
|
||||
The docker-compose files refer to the ``latest`` version, which is always the latest
|
||||
stable release.
|
||||
|
||||
B. If you built the image yourself, do the following:
|
||||
|
||||
.. code:: shell-session
|
||||
|
||||
$ git pull
|
||||
$ docker-compose build
|
||||
$ docker-compose up
|
||||
|
||||
Running ``docker-compose up`` will also apply any new database migrations.
|
||||
If you see everything working, press CTRL+C once to gracefully stop paperless.
|
||||
Then you can start paperless-ngx with ``-d`` to have it run in the background.
|
||||
|
||||
.. note::
|
||||
|
||||
In version 0.9.14, the update process was changed. In 0.9.13 and earlier, the
|
||||
docker-compose files specified exact versions, so a pull will not automatically
|
||||
update to newer versions. In order to enable updates as described above, either
|
||||
get the new ``docker-compose.yml`` file from `here <https://github.com/paperless-ngx/paperless-ngx/tree/master/docker/compose>`_
|
||||
or edit the ``docker-compose.yml`` file, find the line that says
|
||||
|
||||
.. code::
|
||||
|
||||
image: ghcr.io/paperless-ngx/paperless-ngx:0.9.x
|
||||
|
||||
and replace the version with ``latest``:
|
||||
|
||||
.. code::
|
||||
|
||||
image: ghcr.io/paperless-ngx/paperless-ngx:latest
|
||||
|
||||
.. note::
|
||||
From version 1.7.1 onwards, the Docker image can be pinned to a release series.
|
||||
This is often combined with automatic updaters such as Watchtower to allow safer
|
||||
unattended upgrading to new bugfix releases only. It is still recommended to always
|
||||
review release notes before upgrading. To pin your install to a release series, edit
|
||||
the ``docker-compose.yml`` file, find the line that says
|
||||
|
||||
.. code::
|
||||
|
||||
image: ghcr.io/paperless-ngx/paperless-ngx:latest
|
||||
|
||||
and replace the version with the series you want to track, for example:
|
||||
|
||||
.. code::
|
||||
|
||||
image: ghcr.io/paperless-ngx/paperless-ngx:1.7
|
||||
|
||||
Bare Metal Route
|
||||
================
|
||||
|
||||
After grabbing the new release and unpacking the contents, do the following:
|
||||
|
||||
1. Update dependencies. New paperless versions may require additional
|
||||
dependencies. The dependencies required are listed in the section about
|
||||
:ref:`bare metal installations <setup-bare_metal>`.
|
||||
|
||||
2. Update python requirements. Keep in mind to activate your virtual environment
|
||||
before that, if you use one.
|
||||
|
||||
.. code:: shell-session
|
||||
|
||||
$ pip install -r requirements.txt
|
||||
|
||||
3. Migrate the database.
|
||||
|
||||
.. code:: shell-session
|
||||
|
||||
$ cd src
|
||||
$ python3 manage.py migrate
|
||||
|
||||
This might not actually do anything. Not every new paperless version comes with new
|
||||
database migrations.
|
||||
|
||||
Downgrading Paperless
|
||||
#####################
|
||||
|
||||
Downgrades are possible. However, some updates also contain database migrations (these change the layout of the database and may move data).
|
||||
In order to move back from a version that applied database migrations, you'll have to revert the database migration *before* downgrading,
|
||||
and then downgrade paperless.
|
||||
|
||||
This table lists the compatible versions for each database migration number.
|
||||
|
||||
+------------------+-----------------+
|
||||
| Migration number | Version range |
|
||||
+------------------+-----------------+
|
||||
| 1011 | 1.0.0 |
|
||||
+------------------+-----------------+
|
||||
| 1012 | 1.1.0 - 1.2.1 |
|
||||
+------------------+-----------------+
|
||||
| 1014 | 1.3.0 - 1.3.1 |
|
||||
+------------------+-----------------+
|
||||
| 1016 | 1.3.2 - current |
|
||||
+------------------+-----------------+
|
||||
|
||||
Execute the following management command to migrate your database:
|
||||
|
||||
.. code:: shell-session
|
||||
|
||||
$ python3 manage.py migrate documents <migration number>
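For example, to downgrade to a release in the 1.1.0 - 1.2.1 range, the table above gives migration number 1012:

.. code:: shell-session

$ python3 manage.py migrate documents 1012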
|
||||
|
||||
.. note::
|
||||
|
||||
Some migrations cannot be undone. The command will issue errors if that happens.
|
||||
|
||||
.. _utilities-management-commands:
|
||||
|
||||
Management utilities
|
||||
####################
|
||||
|
||||
Paperless comes with some management commands that perform various maintenance
|
||||
tasks on your paperless instance. You can invoke these commands in the following way:
|
||||
|
||||
With docker-compose, while paperless is running:
|
||||
|
||||
.. code:: shell-session
|
||||
|
||||
$ cd /path/to/paperless
|
||||
$ docker-compose exec webserver <command> <arguments>
|
||||
|
||||
With docker, while paperless is running:
|
||||
|
||||
.. code:: shell-session
|
||||
|
||||
$ docker exec -it <container-name> <command> <arguments>
|
||||
|
||||
Bare metal:
|
||||
|
||||
.. code:: shell-session
|
||||
|
||||
$ cd /path/to/paperless/src
|
||||
$ python3 manage.py <command> <arguments>
|
||||
|
||||
All commands have built-in help, which can be accessed by executing them with
|
||||
the argument ``--help``.
|
||||
|
||||
.. _utilities-exporter:
|
||||
|
||||
Document exporter
|
||||
=================
|
||||
|
||||
The document exporter exports all your data from paperless into a folder for
|
||||
backup or migration to another DMS.
|
||||
|
||||
If you use the document exporter within a cronjob to back up your data, you might want to use the ``-T`` flag after ``exec`` to suppress "The input device is not a TTY" errors. For example: ``docker-compose exec -T webserver document_exporter ../export``
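A sketch of such a crontab entry; the schedule and installation path are assumptions and need to be adapted to your setup:

.. code::

# run the exporter every night at 02:00
0 2 * * * cd /path/to/paperless && docker-compose exec -T webserver document_exporter ../export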
|
||||
|
||||
.. code::
|
||||
|
||||
document_exporter target [-c] [-f] [-d]
|
||||
|
||||
optional arguments:
|
||||
-c, --compare-checksums
|
||||
-f, --use-filename-format
|
||||
-d, --delete
|
||||
|
||||
``target`` is a folder to which the data gets written. This includes documents,
|
||||
thumbnails and a ``manifest.json`` file. The manifest contains all metadata from
|
||||
the database (correspondents, tags, etc).
|
||||
|
||||
When you use the provided docker compose script, specify ``../export`` as the
|
||||
target. This path inside the container is automatically mounted on your host on
|
||||
the folder ``export``.
|
||||
|
||||
If the target directory already exists and contains files, paperless will assume
|
||||
that the contents of the export directory are a previous export and will attempt
|
||||
to update the previous export. Paperless will only export changed and added files.
|
||||
Paperless determines whether a file has changed by inspecting the file attributes
|
||||
"date/time modified" and "size". If that does not work out for you, specify
|
||||
``--compare-checksums`` and paperless will attempt to compare file checksums instead.
|
||||
This is slower.
|
||||
|
||||
Paperless will not remove any existing files in the export directory. If you want
|
||||
paperless to also remove files that do not belong to the current export such as files
|
||||
from deleted documents, specify ``--delete``. Be careful when pointing paperless to
|
||||
a directory that already contains other files.
|
||||
|
||||
The filenames generated by this command follow the format
|
||||
``[date created] [correspondent] [title].[extension]``.
|
||||
If you want paperless to use ``PAPERLESS_FILENAME_FORMAT`` for exported filenames
|
||||
instead, specify ``--use-filename-format``.
|
||||
|
||||
|
||||
.. _utilities-importer:
|
||||
|
||||
Document importer
|
||||
=================
|
||||
|
||||
The document importer takes the export produced by the `Document exporter`_ and
|
||||
imports it into paperless.
|
||||
|
||||
The importer works just like the exporter. You point it at a directory, and
|
||||
the script does the rest of the work:
|
||||
|
||||
.. code::
|
||||
|
||||
document_importer source
|
||||
|
||||
When you use the provided docker compose script, put the export inside the
|
||||
``export`` folder in your paperless source directory. Specify ``../export``
|
||||
as the ``source``.
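Putting exporter and importer together, a hedged sketch of moving your data between two docker-compose installs could look like this (each command is run in the respective paperless directory):

.. code:: shell-session

$ # on the old instance
$ docker-compose exec -T webserver document_exporter ../export
$ # copy the contents of the export folder to the new instance's export folder, then:
$ docker-compose exec -T webserver document_importer ../export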
|
||||
|
||||
.. note::
|
||||
|
||||
Importing from a previous version of Paperless may work, but for best results
|
||||
it is suggested to match the versions.
|
||||
|
||||
.. _utilities-retagger:
|
||||
|
||||
Document retagger
|
||||
=================
|
||||
|
||||
Say you've imported a few hundred documents and now want to introduce
|
||||
a tag or set up a new correspondent, and apply its matching to all of
|
||||
the currently-imported docs. This problem is common enough that
|
||||
there are tools for it.
|
||||
|
||||
.. code::
|
||||
|
||||
document_retagger [-h] [-c] [-T] [-t] [-i] [--use-first] [-f]
|
||||
|
||||
optional arguments:
|
||||
-c, --correspondent
|
||||
-T, --tags
|
||||
-t, --document_type
|
||||
-i, --inbox-only
|
||||
--use-first
|
||||
-f, --overwrite
|
||||
|
||||
Run this after changing or adding matching rules. It'll loop over all
|
||||
of the documents in your database and attempt to match documents
|
||||
according to the new rules.
|
||||
|
||||
Specify any combination of ``-c``, ``-T`` and ``-t`` to have the
|
||||
retagger perform matching of the specified metadata type. If you don't
|
||||
specify any of these options, the document retagger won't do anything.
|
||||
|
||||
Specify ``-i`` to have the document retagger work on documents tagged
|
||||
with inbox tags only. This is useful when you don't want to mess with
|
||||
your already processed documents.
|
||||
|
||||
When multiple document types or correspondents match a single document,
|
||||
the retagger won't assign these to the document. Specify ``--use-first``
|
||||
to override this behavior and just use the first correspondent or type
|
||||
it finds. This option does not apply to tags, since any number of tags
|
||||
can be applied to a document.
|
||||
|
||||
Finally, ``-f`` specifies that you wish to overwrite already assigned
|
||||
correspondents, types and/or tags. The default behavior is to not
|
||||
assign correspondents and types to documents that have this data already
|
||||
assigned. ``-f`` works differently for tags: By default, only additional tags get
|
||||
added to documents; no tags will be removed. With ``-f``, tags that don't
|
||||
match a document anymore get removed as well.
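For instance, after adding new matching rules, a typical bare metal invocation that re-runs correspondent and tag matching on inbox documents only would be:

.. code:: shell-session

$ python3 manage.py document_retagger -c -T -i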
|
||||
|
||||
|
||||
Managing the Automatic matching algorithm
|
||||
=========================================
|
||||
|
||||
The *Auto* matching algorithm requires a trained neural network to work.
|
||||
This network needs to be updated whenever something in your data
|
||||
changes. The docker image takes care of that automatically with the task
|
||||
scheduler. You can manually renew the classifier by invoking the following
|
||||
management command:
|
||||
|
||||
.. code::
|
||||
|
||||
document_create_classifier
|
||||
|
||||
This command takes no arguments.
|
||||
|
||||
.. _`administration-index`:
|
||||
|
||||
Managing the document search index
|
||||
==================================
|
||||
|
||||
The document search index is responsible for delivering search results for the
|
||||
website. The document index is automatically updated whenever documents get
|
||||
added to, changed, or removed from paperless. However, if the search yields
|
||||
non-existent documents or does not find anything, you may need to recreate the
|
||||
index manually.
|
||||
|
||||
.. code::
|
||||
|
||||
document_index {reindex,optimize}
|
||||
|
||||
Specify ``reindex`` to have the index created from scratch. This may take some
|
||||
time.
|
||||
|
||||
Specify ``optimize`` to optimize the index. This updates certain aspects of
|
||||
the index and usually makes queries faster and also ensures that the
|
||||
autocompletion works properly. This command is regularly invoked by the task
|
||||
scheduler.
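For example, rebuilding the index from scratch on a bare metal install:

.. code:: shell-session

$ python3 manage.py document_index reindex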
|
||||
|
||||
.. _utilities-renamer:
|
||||
|
||||
Managing filenames
|
||||
==================
|
||||
|
||||
If you use paperless' feature to
|
||||
:ref:`assign custom filenames to your documents <advanced-file_name_handling>`,
|
||||
you can use this command to move all your files after changing
|
||||
the naming scheme.
|
||||
|
||||
.. warning::
|
||||
|
||||
Since this command moves your documents, it is advised to do
|
||||
a backup beforehand. The renaming logic is robust and will never overwrite
|
||||
or delete a file, but you can never be too careful.
|
||||
|
||||
.. code::
|
||||
|
||||
document_renamer
|
||||
|
||||
The command takes no arguments and processes all your documents at once.
|
||||
|
||||
Learn how to use :ref:`Management Utilities<utilities-management-commands>`.
|
||||
|
||||
|
||||
.. _utilities-sanity-checker:
|
||||
|
||||
Sanity checker
|
||||
==============
|
||||
|
||||
Paperless has a built-in sanity checker that inspects your document collection for issues.
|
||||
|
||||
The issues detected by the sanity checker are as follows:
|
||||
|
||||
* Missing original files.
|
||||
* Missing archive files.
|
||||
* Inaccessible original files due to improper permissions.
|
||||
* Inaccessible archive files due to improper permissions.
|
||||
* Corrupted original documents by comparing their checksum against what is stored in the database.
|
||||
* Corrupted archive documents by comparing their checksum against what is stored in the database.
|
||||
* Missing thumbnails.
|
||||
* Inaccessible thumbnails due to improper permissions.
|
||||
* Documents without any content (warning).
|
||||
* Orphaned files in the media directory (warning). These are files that are not referenced by any document in paperless.
|
||||
|
||||
|
||||
.. code::
|
||||
|
||||
document_sanity_checker
|
||||
|
||||
The command takes no arguments. Depending on the size of your document archive, this may take some time.
|
||||
|
||||
|
||||
Fetching e-mail
|
||||
===============
|
||||
|
||||
Paperless automatically fetches your e-mail every 10 minutes by default. If
|
||||
you want to invoke the email consumer manually, call the following management
|
||||
command:
|
||||
|
||||
.. code::
|
||||
|
||||
mail_fetcher
|
||||
|
||||
The command takes no arguments and processes all your mail accounts and rules.
|
||||
|
||||
.. _utilities-archiver:
|
||||
|
||||
Creating archived documents
|
||||
===========================
|
||||
|
||||
Paperless stores archived PDF/A documents alongside your original documents.
|
||||
These archived documents will also contain selectable text for image-only
|
||||
originals.
|
||||
These documents are derived from the originals, which are always stored
|
||||
unmodified. If coming from an earlier version of paperless, your documents
|
||||
won't have archived versions.
|
||||
|
||||
This command creates PDF/A documents for your documents.
|
||||
|
||||
.. code::
|
||||
|
||||
document_archiver --overwrite --document <id>
|
||||
|
||||
This command will only attempt to create archived documents when no archived
|
||||
document exists yet, unless ``--overwrite`` is specified. If ``--document <id>``
|
||||
is specified, the archiver will only process that document.
|
||||
|
||||
.. note::
|
||||
|
||||
This command essentially performs OCR on all your documents again,
|
||||
according to your settings. If you run this with ``PAPERLESS_OCR_MODE=redo``,
|
||||
it will potentially run for a very long time. You can cancel the command
|
||||
at any time, since this command will skip already archived versions the next time
|
||||
it is run.
|
||||
|
||||
.. note::
|
||||
|
||||
Some documents will cause errors and cannot be converted into PDF/A documents,
|
||||
such as encrypted PDF documents. The archiver will skip over these documents
|
||||
each time it sees them.
|
||||
|
||||
.. _utilities-encyption:
|
||||
|
||||
Managing encryption
|
||||
===================
|
||||
|
||||
Documents can be stored in Paperless using GnuPG encryption.
|
||||
|
||||
.. danger::
|
||||
|
||||
Encryption is deprecated since paperless-ngx 0.9 and doesn't really provide any
|
||||
additional security, since you have to store the passphrase in a configuration
|
||||
file on the same system as the encrypted documents for paperless to work.
|
||||
Furthermore, the entire text content of the documents is stored in plain text in the
|
||||
database, even if your documents are encrypted. Filenames are not encrypted
|
||||
either.
|
||||
|
||||
Also, the web server provides transparent access to your encrypted documents.
|
||||
|
||||
Consider running paperless on an encrypted filesystem instead, which will then
|
||||
at least provide security against physical hardware theft.
|
||||
|
||||
|
||||
Enabling encryption
|
||||
-------------------
|
||||
|
||||
Enabling encryption is no longer supported.
|
||||
|
||||
|
||||
Disabling encryption
|
||||
--------------------
|
||||
|
||||
Basic usage to disable encryption of your document store:
|
||||
|
||||
(Note: If ``PAPERLESS_PASSPHRASE`` isn't set already, you need to specify it here)
|
||||
|
||||
.. code::
|
||||
|
||||
decrypt_documents [--passphrase SECR3TP4SSPHRA$E]
|
@@ -1,364 +0,0 @@
|
||||
***************
|
||||
Advanced topics
|
||||
***************
|
||||
|
||||
Paperless offers a couple features that automate certain tasks and make your life
|
||||
easier.
|
||||
|
||||
.. _advanced-matching:
|
||||
|
||||
Matching tags, correspondents, document types, and storage paths
|
||||
################################################################
|
||||
|
||||
Paperless will compare the matching algorithms defined by every tag, correspondent,
|
||||
document type, and storage path in your database to see if they apply to the text
|
||||
in a document. In other words, if you define a tag called ``Home Utility``
|
||||
that had a ``match`` property of ``bc hydro`` and a ``matching_algorithm`` of
|
||||
``literal``, Paperless will automatically tag your newly-consumed document with
|
||||
your ``Home Utility`` tag so long as the text ``bc hydro`` appears in the body
|
||||
of the document somewhere.
|
||||
|
||||
The matching logic is quite powerful. It supports searching the text of your
|
||||
document with different algorithms, and as such, some experimentation may be
|
||||
necessary to get things right.
|
||||
|
||||
In order to have a tag, correspondent, document type, or storage path assigned
|
||||
automatically to newly consumed documents, assign a match and matching algorithm
|
||||
using the web interface. These settings define when to assign tags, correspondents,
|
||||
document types, and storage paths to documents.
|
||||
|
||||
The following algorithms are available:
|
||||
|
||||
* **Any:** Looks for any occurrence of any word provided in match in the PDF.
|
||||
If you define the match as ``Bank1 Bank2``, it will match documents containing
|
||||
either of these terms.
|
||||
* **All:** Requires that every word provided appears in the PDF, albeit not in the
|
||||
order provided.
|
||||
* **Literal:** Matches only if the match appears exactly as provided (i.e. preserve ordering) in the PDF.
|
||||
* **Regular expression:** Parses the match as a regular expression and tries to
|
||||
find a match within the document.
|
||||
* **Fuzzy match:** Performs an approximate (fuzzy) text comparison, so the match can still apply when the document text deviates slightly, e.g. due to OCR errors or typos.
|
||||
* **Auto:** Tries to automatically match new documents. This does not require you
|
||||
to set a match. See the notes below.
|
||||
|
||||
When using the *any* or *all* matching algorithms, you can search for terms
|
||||
that consist of multiple words by enclosing them in double quotes. For example,
|
||||
defining a match text of ``"Bank of America" BofA`` using the *any* algorithm,
|
||||
will match documents that contain either "Bank of America" or "BofA", but will
|
||||
not match documents containing "Bank of South America".
|
||||
|
||||
Then just save your tag, correspondent, document type, or storage path and run
|
||||
another document through the consumer. Once complete, you should see the
|
||||
newly-created document, automatically tagged with the appropriate data.
|
||||
|
||||
|
||||
.. _advanced-automatic_matching:
|
||||
|
||||
Automatic matching
|
||||
==================
|
||||
|
||||
Paperless-ngx comes with a new matching algorithm called *Auto*. This matching
|
||||
algorithm tries to assign tags, correspondents, document types, and storage paths
|
||||
to your documents based on how you have already assigned these on existing documents.
|
||||
It uses a neural network under the hood.
|
||||
|
||||
If, for example, all your bank statements of your account 123 at the Bank of
|
||||
America are tagged with the tag "bofa_123" and the matching algorithm of this
|
||||
tag is set to *Auto*, this neural network will examine your documents and
|
||||
automatically learn when to assign this tag.
|
||||
|
||||
Paperless tries to hide much of the involved complexity with this approach.
|
||||
However, there are a couple caveats you need to keep in mind when using this
|
||||
feature:
|
||||
|
||||
* Changes to your documents are not immediately reflected by the matching
|
||||
algorithm. The neural network needs to be *trained* on your documents after
|
||||
changes. Paperless periodically (default: once each hour) checks for changes
|
||||
and does this automatically for you.
|
||||
* The Auto matching algorithm only takes documents into account which are NOT
|
||||
placed in your inbox (i.e. do not have any inbox tags assigned to them). This ensures
|
||||
that the neural network only learns from documents which you have correctly
|
||||
tagged before.
|
||||
* The matching algorithm can only work if there is a correlation between the
|
||||
tag, correspondent, document type, or storage path and the document itself.
|
||||
Your bank statements usually contain your bank account number and the name
|
||||
of the bank, so this works reasonably well. However, tags such as "TODO"
|
||||
cannot be automatically assigned.
|
||||
* The matching algorithm needs a reasonable number of documents to identify when
|
||||
to assign tags, correspondents, storage paths, and types. If one out of a
|
||||
thousand documents has the correspondent "Very obscure web shop I bought
|
||||
something five years ago", it will probably not assign this correspondent
|
||||
automatically if you buy something from them again. The more documents, the better.
|
||||
* Paperless also needs a reasonable amount of negative examples to decide when
|
||||
not to assign a certain tag, correspondent, document type, or storage path. This will
|
||||
usually be the case as you start filling up paperless with documents.
|
||||
Example: If all your documents are either from "Webshop" or "Bank", paperless
|
||||
will assign one of these correspondents to ANY new document, if both are set
|
||||
to automatic matching.
|
||||
|
||||
Hooking into the consumption process
|
||||
####################################
|
||||
|
||||
Sometimes you may want to do something arbitrary whenever a document is
|
||||
consumed. Rather than try to predict what you may want to do, Paperless lets
|
||||
you execute scripts of your own choosing just before or after a document is
|
||||
consumed using a couple simple hooks.
|
||||
|
||||
Just write a script, put it somewhere that Paperless can read & execute, and
|
||||
then put the path to that script in ``paperless.conf`` or ``docker-compose.env`` with the variable name
|
||||
of either ``PAPERLESS_PRE_CONSUME_SCRIPT`` or
|
||||
``PAPERLESS_POST_CONSUME_SCRIPT``.
|
||||
|
||||
.. important::
|
||||
|
||||
These scripts are executed in a **blocking** process, which means that if
|
||||
a script takes a long time to run, it can significantly slow down your
|
||||
document consumption flow. If you want things to run asynchronously,
|
||||
you'll have to fork the process in your script and exit.
|
||||
|
||||
|
||||
Pre-consumption script
|
||||
======================
|
||||
|
||||
Executed after the consumer sees a new document in the consumption folder, but
|
||||
before any processing of the document is performed. This script can access the
|
||||
following relevant environment variables:
|
||||
|
||||
* ``DOCUMENT_SOURCE_PATH``
|
||||
|
||||
A simple but common example for this would be creating a simple script like
|
||||
this:
|
||||
|
||||
``/usr/local/bin/ocr-pdf``
|
||||
|
||||
.. code:: bash
|
||||
|
||||
#!/usr/bin/env bash
|
||||
pdf2pdfocr.py -i "${DOCUMENT_SOURCE_PATH}"
|
||||
|
||||
``/etc/paperless.conf``
|
||||
|
||||
.. code:: bash
|
||||
|
||||
...
|
||||
PAPERLESS_PRE_CONSUME_SCRIPT="/usr/local/bin/ocr-pdf"
|
||||
...
|
||||
|
||||
This will pass the path to the document about to be consumed to ``/usr/local/bin/ocr-pdf``,
|
||||
which will in turn call `pdf2pdfocr.py`_ on your document, which will then
|
||||
overwrite the file with an OCR'd version of the file and exit, at which point
|
||||
the consumption process will begin with the newly modified file.
|
||||
|
||||
.. _pdf2pdfocr.py: https://github.com/LeoFCardoso/pdf2pdfocr
|
||||
|
||||
.. _advanced-post_consume_script:
|
||||
|
||||
Post-consumption script
|
||||
=======================
|
||||
|
||||
Executed after the consumer has successfully processed a document and has moved it
|
||||
into paperless. It receives the following environment variables:
|
||||
|
||||
* ``DOCUMENT_ID``
|
||||
* ``DOCUMENT_FILE_NAME``
|
||||
* ``DOCUMENT_CREATED``
|
||||
* ``DOCUMENT_MODIFIED``
|
||||
* ``DOCUMENT_ADDED``
|
||||
* ``DOCUMENT_SOURCE_PATH``
|
||||
* ``DOCUMENT_ARCHIVE_PATH``
|
||||
* ``DOCUMENT_THUMBNAIL_PATH``
|
||||
* ``DOCUMENT_DOWNLOAD_URL``
|
||||
* ``DOCUMENT_THUMBNAIL_URL``
|
||||
* ``DOCUMENT_CORRESPONDENT``
|
||||
* ``DOCUMENT_TAGS``
|
||||
|
||||
The script can be in any language, but for a simple shell script
|
||||
example, you can take a look at `post-consumption-example.sh`_ in this project.
|
||||
|
||||
The post-consumption script cannot cancel the consumption process.
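As a minimal sketch of such a script (the log file location is merely an assumption):

.. code:: bash

#!/usr/bin/env bash
# append one line per consumed document to a log file
echo "$(date -Is) consumed document ${DOCUMENT_ID}: ${DOCUMENT_FILE_NAME}" >> /usr/src/paperless/scripts/post-consumption.log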
|
||||
|
||||
Docker
|
||||
------
|
||||
Assuming you have ``/home/foo/paperless-ngx/scripts/post-consumption-example.sh``.
|
||||
|
||||
You can pass that script into the consumer container via a host mount in your ``docker-compose.yml``.
|
||||
|
||||
.. code:: yaml
|
||||
|
||||
...
|
||||
consumer:
|
||||
...
|
||||
volumes:
|
||||
...
|
||||
- /home/foo/paperless-ngx/scripts:/path/in/container/scripts/
|
||||
...
|
||||
|
||||
Example (docker-compose.yml): ``- /home/foo/paperless-ngx/scripts:/usr/src/paperless/scripts``
|
||||
|
||||
which in turn requires the variable ``PAPERLESS_POST_CONSUME_SCRIPT`` in ``docker-compose.env`` to point to ``/path/in/container/scripts/post-consumption-example.sh``.
|
||||
|
||||
Example (docker-compose.env): ``PAPERLESS_POST_CONSUME_SCRIPT=/usr/src/paperless/scripts/post-consumption-example.sh``
|
||||
|
||||
Troubleshooting:
|
||||
|
||||
- Monitor the docker-compose log ``cd ~/paperless-ngx; docker-compose logs -f``
|
||||
- Check your script's permission e.g. in case of permission error ``sudo chmod 755 post-consumption-example.sh``
|
||||
- Pipe your script's output to a log file, e.g. ``echo "${DOCUMENT_ID}" | tee --append /usr/src/paperless/scripts/post-consumption-example.log``
|
||||
|
||||
.. _post-consumption-example.sh: https://github.com/paperless-ngx/paperless-ngx/blob/main/scripts/post-consumption-example.sh
|
||||
|
||||
.. _advanced-file_name_handling:
|
||||
|
||||
File name handling
|
||||
##################
|
||||
|
||||
By default, paperless stores your documents in the media directory and renames them
|
||||
using the identifier which it has assigned to each document. You will end up getting
|
||||
files like ``0000123.pdf`` in your media directory. This isn't necessarily a bad
|
||||
thing, because you normally don't have to access these files manually. However, if
|
||||
you wish to name your files differently, you can do that by adjusting the
|
||||
``PAPERLESS_FILENAME_FORMAT`` configuration option.
|
||||
|
||||
This variable allows you to configure the filename (folders are allowed) using
|
||||
placeholders. For example, configuring this to
|
||||
|
||||
.. code:: bash
|
||||
|
||||
PAPERLESS_FILENAME_FORMAT={created_year}/{correspondent}/{title}
|
||||
|
||||
will create a directory structure as follows:
|
||||
|
||||
.. code::
|
||||
|
||||
2019/
|
||||
My bank/
|
||||
Statement January.pdf
|
||||
Statement February.pdf
|
||||
2020/
|
||||
My bank/
|
||||
Statement January.pdf
|
||||
Letter.pdf
|
||||
Letter_01.pdf
|
||||
Shoe store/
|
||||
My new shoes.pdf
|
||||
|
||||
.. danger::
|
||||
|
||||
Do not manually move your files in the media folder. Paperless remembers the
|
||||
last filename a document was stored as. If you do rename a file, paperless will
|
||||
report your files as missing and won't be able to find them.
|
||||
|
||||
Paperless provides the following placeholders within filenames:
|
||||
|
||||
* ``{asn}``: The archive serial number of the document, or "none".
|
||||
* ``{correspondent}``: The name of the correspondent, or "none".
|
||||
* ``{document_type}``: The name of the document type, or "none".
|
||||
* ``{tag_list}``: A comma separated list of all tags assigned to the document.
|
||||
* ``{title}``: The title of the document.
|
||||
* ``{created}``: The full date (ISO format) the document was created.
|
||||
* ``{created_year}``: Year created only.
|
||||
* ``{created_month}``: Month created only (number 01-12).
|
||||
* ``{created_day}``: Day created only (number 01-31).
|
||||
* ``{added}``: The full date (ISO format) the document was added to paperless.
|
||||
* ``{added_year}``: Year added only.
|
||||
* ``{added_month}``: Month added only (number 01-12).
|
||||
* ``{added_day}``: Day added only (number 01-31).
|
||||
|
||||
|
||||
Paperless will try to conserve the information from your database as much as possible.
|
||||
However, some characters that you can use in document titles and correspondent names (such
|
||||
as ``: \ /`` and a couple more) are not allowed in filenames and will be replaced with dashes.
|
||||
|
||||
If paperless detects that two documents share the same filename, paperless will automatically
|
||||
append ``_01``, ``_02``, etc to the filename. This happens if all the placeholders in a filename
|
||||
evaluate to the same value.
|
||||
|
||||
.. hint::
|
||||
You can affect how empty placeholders are treated by changing the following setting to
|
||||
``true``.
|
||||
|
||||
.. code::
|
||||
|
||||
PAPERLESS_FILENAME_FORMAT_REMOVE_NONE=True
|
||||
|
||||
Doing this results in all empty placeholders resolving to "" instead of "none" as stated above.
|
||||
Spaces before empty placeholders are removed as well, and empty directories are omitted.
|
||||
|
||||
.. hint::
|
||||
|
||||
Paperless checks the filename of a document whenever it is saved. Therefore,
|
||||
you need to update the filenames of your documents and move them after altering
|
||||
this setting by invoking the :ref:`document renamer <utilities-renamer>`.
|
||||
|
||||
.. warning::
|
||||
|
||||
Make absolutely sure you get the spelling of the placeholders right, or else
|
||||
paperless will use the default naming scheme instead.
|
||||
|
||||
.. caution::
|
||||
|
||||
As of now, you could totally tell paperless to store your files anywhere outside
|
||||
the media directory by setting
|
||||
|
||||
.. code::
|
||||
|
||||
PAPERLESS_FILENAME_FORMAT=../../my/custom/location/{title}
|
||||
|
||||
However, keep in mind that inside docker, if files get stored outside of the
|
||||
predefined volumes, they will be lost after a restart of paperless.
|
||||
|
||||
|
||||
Storage paths
|
||||
#############
|
||||
|
||||
One of the best things in Paperless is that you can not only access the documents via the
|
||||
web interface, but also via the file system.
|
||||
|
||||
When a single storage layout is not sufficient for your use case, storage paths come to
|
||||
the rescue. Storage paths allow you to configure more precisely where each document is stored
|
||||
in the file system.
|
||||
|
||||
- Each storage path is a `PAPERLESS_FILENAME_FORMAT` and follows the rules described above
|
||||
- Each document is assigned a storage path using the matching algorithms described above, but
|
||||
can be overwritten at any time
|
||||
|
||||
For example, you could define the following two storage paths:
|
||||
|
||||
1. Normal communications are put into a folder structure sorted by `year/correspondent`
|
||||
2. Communications with insurance companies are stored in a flat structure with longer file names,
|
||||
but containing the full date of the correspondence.
|
||||
|
||||
.. code::
|
||||
|
||||
By Year = {created_year}/{correspondent}/{title}
|
||||
Insurances = Insurances/{correspondent}/{created_year}-{created_month}-{created_day} {title}
|
||||
|
||||
|
||||
If you then map these storage paths to the documents, you might get the following result.
|
||||
For simplicity, `By Year` defines the same structure as in the previous example above.
|
||||
|
||||
.. code:: text
|
||||
|
||||
2019/ # By Year
|
||||
My bank/
|
||||
Statement January.pdf
|
||||
Statement February.pdf
|
||||
|
||||
Insurances/ # Insurances
|
||||
Healthcare 123/
|
||||
2022-01-01 Statement January.pdf
|
||||
2022-02-02 Letter.pdf
|
||||
2022-02-03 Letter.pdf
|
||||
Dental 456/
|
||||
2021-12-01 New Conditions.pdf
|
||||
|
||||
|
||||
.. hint::
|
||||
|
||||
Defining a storage path is optional. If no storage path is defined for a document, the global
|
||||
`PAPERLESS_FILENAME_FORMAT` is applied.
|
||||
|
||||
.. caution::
|
||||
|
||||
If you adjust the format of an existing storage path, old documents don't get relocated automatically.
|
||||
You need to run the :ref:`document renamer <utilities-renamer>` to adjust their paths.
|
306
docs/api.rst
@@ -1,303 +1,23 @@

.. _api:

************
The REST API
************

Paperless makes use of the `Django REST Framework`_ standard API interface.
It provides a browsable API for most of its endpoints, which you can inspect
at ``http://<paperless-host>:<port>/api/``. This also documents most of the
available filters and ordering fields.

.. _Django REST Framework: http://django-rest-framework.org/

The API provides 5 main endpoints:

* ``/api/documents/``: Full CRUD support, except POSTing new documents. See below.
* ``/api/correspondents/``: Full CRUD support.
* ``/api/document_types/``: Full CRUD support.
* ``/api/logs/``: Read-only.
* ``/api/tags/``: Full CRUD support.

All of these endpoints except for the logging endpoint
allow you to fetch, edit and delete individual objects
by appending their primary key to the path, for example ``/api/documents/454/``.

The objects served by the document endpoint contain the following fields:

* ``id``: ID of the document. Read-only.
* ``title``: Title of the document.
* ``content``: Plain text content of the document.
* ``tags``: List of IDs of tags assigned to this document, or an empty list.
* ``document_type``: Document type of this document, or null.
* ``correspondent``: Correspondent of this document, or null.
* ``created``: The date and time at which this document was created.
* ``created_date``: The date (YYYY-MM-DD) at which this document was created. Optional. If also passed with ``created``, this is ignored.
* ``modified``: The date at which this document was last edited in paperless. Read-only.
* ``added``: The date at which this document was added to paperless. Read-only.
* ``archive_serial_number``: The identifier of this document in a physical document archive.
* ``original_file_name``: Verbose filename of the original document. Read-only.
* ``archived_file_name``: Verbose filename of the archived document. Read-only. Null if no archived document is available.
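
For illustration, here is a minimal sketch that fetches a single document and reads a few of
these fields. The host name and the token are placeholders for your own setup and are not part
of the API itself; see the Authorization section below for how to authenticate.

.. code:: python

    import requests

    BASE_URL = "http://localhost:8000"            # assumed host, adjust to your installation
    HEADERS = {"Authorization": "Token <token>"}  # placeholder token, see "Authorization" below

    # Fetch document 454; any existing primary key works the same way.
    response = requests.get(f"{BASE_URL}/api/documents/454/", headers=HEADERS)
    response.raise_for_status()

    doc = response.json()
    print(doc["title"], doc["correspondent"], doc["tags"])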


Downloading documents
#####################

In addition to that, the document endpoint offers these additional actions on
individual documents:

* ``/api/documents/<pk>/download/``: Download the document.
* ``/api/documents/<pk>/preview/``: Display the document inline,
  without downloading it.
* ``/api/documents/<pk>/thumb/``: Download the PNG thumbnail of a document.

Paperless generates archived PDF/A documents from consumed files and stores both
the original files as well as the archived files. By default, the endpoints
for previews and downloads serve the archived file, if it is available.
Otherwise, the original file is served. Some documents cannot be archived.

The endpoints correctly serve the response header fields ``Content-Disposition``
and ``Content-Type`` to indicate the filename for download and the type of content of
the document.

In order to download or preview the original document when an archived document is available,
supply the query parameter ``original=true``.
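
As a sketch (host and token are placeholders as above), downloading the original file of a
document and inspecting the ``Content-Disposition`` header could look like this:

.. code:: python

    import requests

    BASE_URL = "http://localhost:8000"            # assumed host
    HEADERS = {"Authorization": "Token <token>"}  # placeholder token

    # Request the original file even if an archived version exists.
    response = requests.get(
        f"{BASE_URL}/api/documents/454/download/",
        params={"original": "true"},
        headers=HEADERS,
    )
    response.raise_for_status()

    # Content-Disposition carries the suggested filename; the fixed name below is just a fallback.
    print(response.headers.get("Content-Disposition"))
    with open("document.pdf", "wb") as f:
        f.write(response.content)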

.. hint::

    Paperless used to provide this functionality at ``/fetch/<pk>/preview``,
    ``/fetch/<pk>/thumb`` and ``/fetch/<pk>/doc``. Redirects to the new URLs
    are in place. However, if you use these old URLs to access documents, you
    should update your app or script to use the new URLs.


Getting document metadata
#########################

The API also has an endpoint to retrieve read-only metadata about specific documents. This
information is not served along with the document objects, since it requires reading
files and would therefore slow down document lists considerably.

Access the metadata of a document with an ID ``id`` at ``/api/documents/<id>/metadata/``.

The endpoint reports the following data:

* ``original_checksum``: MD5 checksum of the original document.
* ``original_size``: Size of the original document, in bytes.
* ``original_mime_type``: Mime type of the original document.
* ``media_filename``: Current filename of the document, under which it is stored inside the media directory.
* ``has_archive_version``: True, if this document is archived, false otherwise.
* ``original_metadata``: A list of metadata associated with the original document. See below.
* ``archive_checksum``: MD5 checksum of the archived document, or null.
* ``archive_size``: Size of the archived document in bytes, or null.
* ``archive_metadata``: Metadata associated with the archived document, or null. See below.

File metadata is reported as a list of objects in the following form:

.. code:: json

    [
        {
            "namespace": "http://ns.adobe.com/pdf/1.3/",
            "prefix": "pdf",
            "key": "Producer",
            "value": "SparklePDF, Fancy edition"
        }
    ]

``namespace`` and ``prefix`` can be null. The actual metadata reported depends on the file type and the metadata
available in that specific document. Paperless only reports PDF metadata at this point.
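
A short sketch of reading this endpoint (host and token are again placeholders):

.. code:: python

    import requests

    BASE_URL = "http://localhost:8000"            # assumed host
    HEADERS = {"Authorization": "Token <token>"}  # placeholder token

    metadata = requests.get(
        f"{BASE_URL}/api/documents/454/metadata/", headers=HEADERS
    ).json()

    print(metadata["original_mime_type"], metadata["original_size"])
    for entry in metadata["original_metadata"]:
        print(entry["key"], "=", entry["value"])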

Authorization
#############

The REST API provides three different forms of authentication.

1. Basic authentication

   Authorize by providing an HTTP header in the form

   .. code::

       Authorization: Basic <credentials>

   where ``credentials`` is a base64-encoded string of ``<username>:<password>``.

2. Session authentication

   When you're logged into paperless in your browser, you're automatically
   logged into the API as well and don't need to provide any authorization
   headers.

3. Token authentication

   Paperless also offers an endpoint to acquire authentication tokens.

   POST a username and password as a form or JSON string to ``/api/token/``
   and paperless will respond with a token, if the login data is correct.
   This token can be used to authenticate other requests with the
   following HTTP header:

   .. code::

       Authorization: Token <token>

   Tokens can be managed and revoked in the paperless admin.
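
Putting the token flow together, a minimal sketch could look like the following. The host and
the credentials are placeholders, and the ``token`` key in the response is assumed to follow the
Django REST Framework default shape.

.. code:: python

    import requests

    BASE_URL = "http://localhost:8000"  # assumed host

    # Acquire a token by POSTing the login data as a form.
    response = requests.post(
        f"{BASE_URL}/api/token/",
        data={"username": "paperless", "password": "secret"},  # placeholder credentials
    )
    response.raise_for_status()
    token = response.json()["token"]  # assumed response shape: {"token": "..."}

    # Use the token to authenticate any other request.
    documents = requests.get(
        f"{BASE_URL}/api/documents/",
        headers={"Authorization": f"Token {token}"},
    ).json()
    print(documents["count"])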

Searching for documents
#######################

Full text searching is available on the ``/api/documents/`` endpoint. Two specific
query parameters cause the API to return full text search results:

* ``/api/documents/?query=your%20search%20query``: Search for a document using a full text query.
  For details on the syntax, see :ref:`basic-usage_searching`.

* ``/api/documents/?more_like=1234``: Search for documents similar to the document with id 1234.

Pagination works exactly the same as it does for normal requests on this endpoint.

Certain limitations apply to full text queries:

* Results are always sorted by search score. The results matching the query best will show up first.
* Only a small subset of filtering parameters are supported.

Furthermore, each returned document has an additional ``__search_hit__`` attribute with various information
about the search results:

.. code::

    {
        "count": 31,
        "next": "http://localhost:8000/api/documents/?page=2&query=test",
        "previous": null,
        "results": [

            ...

            {
                "id": 123,
                "title": "title",
                "content": "content",

                ...

                "__search_hit__": {
                    "score": 0.343,
                    "highlights": "text <span class=\"match\">Test</span> text",
                    "rank": 23
                }
            },

            ...

        ]
    }

* ``score`` is an indication of how well this document matches the query relative to the other search results.
* ``highlights`` is an excerpt from the document content and highlights the search terms with ``<span>`` tags as shown above.
* ``rank`` is the index of the search results. The first result will have rank 0.
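
A small search sketch, again with a placeholder host and token, that prints the rank, score and
title of each hit:

.. code:: python

    import requests

    BASE_URL = "http://localhost:8000"            # assumed host
    HEADERS = {"Authorization": "Token <token>"}  # placeholder token

    results = requests.get(
        f"{BASE_URL}/api/documents/",
        params={"query": "invoice 2021"},  # any full text query
        headers=HEADERS,
    ).json()

    for doc in results["results"]:
        hit = doc["__search_hit__"]
        print(hit["rank"], round(hit["score"], 3), doc["title"])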

``/api/search/autocomplete/``
=============================

Get auto completions for a partial search term.

Query parameters:

* ``term``: The incomplete term.
* ``limit``: Amount of results. Defaults to 10.

Results returned by the endpoint are ordered by importance of the term in the
document index. The first result is the term that has the highest Tf/Idf score
in the index.

.. code:: json

    [
        "term1",
        "term3",
        "term6",
        "term4"
    ]
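
A request against this endpoint could be sketched as follows (host and token are placeholders):

.. code:: python

    import requests

    BASE_URL = "http://localhost:8000"            # assumed host
    HEADERS = {"Authorization": "Token <token>"}  # placeholder token

    terms = requests.get(
        f"{BASE_URL}/api/search/autocomplete/",
        params={"term": "inv", "limit": 5},
        headers=HEADERS,
    ).json()
    print(terms)  # a plain JSON list of completions, as shown above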


.. _api-file_uploads:

POSTing documents
#################

The API provides a special endpoint for file uploads:

``/api/documents/post_document/``

POST a multipart form to this endpoint, where the form field ``document`` contains
the document that you want to upload to paperless. The filename is sanitized and
then used to store the document in a temporary directory, and the consumer will
be instructed to consume the document from there.

The endpoint supports the following optional form fields:

* ``title``: Specify a title that the consumer should use for the document.
* ``created``: Specify a DateTime where the document was created (e.g. "2016-04-19" or "2016-04-19 06:15:00+02:00").
* ``correspondent``: Specify the ID of a correspondent that the consumer should use for the document.
* ``document_type``: Similar to correspondent.
* ``tags``: Similar to correspondent. Specify this multiple times to have multiple tags added
  to the document.

The endpoint will immediately return "OK" if the document consumption process
was started successfully. No additional status information about the consumption
process itself is available, since that happens in a different process.
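
A minimal upload sketch, using the same placeholder host and token as in the earlier examples.
Passing ``tags`` as a list makes ``requests`` repeat the form field, which matches the
"specify this multiple times" rule above; the IDs used here are placeholders.

.. code:: python

    import requests

    BASE_URL = "http://localhost:8000"            # assumed host
    HEADERS = {"Authorization": "Token <token>"}  # placeholder token

    with open("invoice.pdf", "rb") as f:          # any local file you want to consume
        response = requests.post(
            f"{BASE_URL}/api/documents/post_document/",
            headers=HEADERS,
            files={"document": f},
            data={
                "title": "Invoice March",
                "correspondent": 3,               # placeholder correspondent ID
                "tags": [1, 2],                   # repeated form field -> multiple tags
            },
        )
    response.raise_for_status()
    print(response.text)  # "OK" once the consumption process has been queued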


.. _api-versioning:

API Versioning
##############

The REST API is versioned since Paperless-ngx 1.3.0.

* Versioning ensures that changes to the API don't break older clients.
* Clients specify the specific version of the API they wish to use with every request and Paperless will handle the request using the specified API version.
* Even if the underlying data model changes, older API versions will always serve compatible data.
* If no version is specified, Paperless will serve version 1 to ensure compatibility with older clients that do not request a specific API version.

API versions are specified by submitting an additional HTTP ``Accept`` header with every request:

.. code::

    Accept: application/json; version=6

If an invalid version is specified, Paperless 1.3.0 will respond with "406 Not Acceptable" and an error message in the body.
Earlier versions of Paperless will serve API version 1 regardless of whether a version is specified via the ``Accept`` header.

If a client wishes to verify whether it is compatible with any given server, the following procedure should be performed:

1. Perform an *authenticated* request against any API endpoint. If the server is on version 1.3.0 or newer, the server will
   add two custom headers to the response:

   .. code::

       X-Api-Version: 2
       X-Version: 1.3.0

2. Determine whether the client is compatible with this server based on the presence/absence of these headers and their values if present.
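
Sketched in code, the check could look like this; the host, the token and the requested version
number are placeholders:

.. code:: python

    import requests

    BASE_URL = "http://localhost:8000"  # assumed host

    response = requests.get(
        f"{BASE_URL}/api/documents/",
        headers={
            "Authorization": "Token <token>",          # placeholder token
            "Accept": "application/json; version=2",   # request a specific API version
        },
    )
    response.raise_for_status()

    # Servers from 1.3.0 onwards report these headers; older servers omit them.
    print(response.headers.get("X-Api-Version"))
    print(response.headers.get("X-Version"))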


API Changelog
=============

Version 1
---------

Initial API version.

Version 2
---------

* Added field ``Tag.color``. This read/write string field contains a hex color such as ``#a6cee3``.
* Added read-only field ``Tag.text_color``. This field contains the text color to use for a specific tag, which is either black or white depending on the brightness of ``Tag.color``.
* Removed field ``Tag.colour``.

1947
docs/changelog.md
700
docs/changelog.rst
Normal file
@@ -0,0 +1,700 @@
|
||||
Changelog
|
||||
#########
|
||||
|
||||
2.6.0
|
||||
=====
|
||||
|
||||
* Allow an infinite number of logs to be deleted. Thanks to `Ulli`_ for noting
|
||||
the problem in `#433`_.
|
||||
* Fix the ``RecentCorrespondentsFilter`` correspondents filter that was added
|
||||
in 2.4 to play nice with the defaults. Thanks to `tsia`_ and `Sblop`_ who
|
||||
pointed this out. `#423`_.
|
||||
* Updated dependencies to include (among other things) a security patch to
|
||||
requests.
|
||||
* Fix text in sample data for tests so that the language guesser stops thinking
|
||||
that everything is in Catalan because we had *Lorem ipsum* in there.
|
||||
* Tweaked the gunicorn sample command to use filesystem paths instead of Python
|
||||
paths. `#441`_
|
||||
* Added pretty colour boxes next to the hex values in the Tags section, thanks
|
||||
to a pull request from `Joshua Taillon`_ `#442`_.
|
||||
* Added a ``.editorconfig`` file to better specify coding style.
|
||||
* `Joshua Taillon`_ also added some logic to tie Paperless' date guessing logic
|
||||
into how it parses file names on import. `#440`_
|
||||
|
||||
|
||||
2.5.0
|
||||
=====
|
||||
|
||||
* **New dependency**: Paperless now optimises thumbnail generation with
|
||||
`optipng`_, so you'll need to install that somewhere in your PATH or declare
|
||||
its location in ``PAPERLESS_OPTIPNG_BINARY``. The Docker image has already
|
||||
been updated on the Docker Hub, so you just need to pull the latest one from
|
||||
there if you're a Docker user.
|
||||
|
||||
* "Login free" instances of Paperless were breaking whenever you tried to edit
|
||||
objects in the admin: adding/deleting tags or correspondents, or even fixing
|
||||
spelling. This was due to the "user hack" we were applying to sessions that
|
||||
weren't using a login, as that hack user didn't have a valid id. The fix was
|
||||
to attribute the first user id in the system to this hack user. `#394`_
|
||||
|
||||
* A problem in how we handle slug values on Tags and Correspondents required a
|
||||
few changes to how we handle this field `#393`_:
|
||||
|
||||
1. Slugs are no longer editable. They're derived from the name of the tag or
|
||||
correspondent at save time, so if you wanna change the slug, you have to
|
||||
change the name, and even then you're restricted to the rules of the
|
||||
``slugify()`` function. The slug value is still visible in the admin
|
||||
though.
|
||||
2. I've added a migration to go over all existing tags & correspondents and
|
||||
rewrite the ``.slug`` values to ones conforming to the ``slugify()``
|
||||
rules.
|
||||
3. The consumption process now uses the same rules as ``.save()`` in
|
||||
determining a slug and using that to check for an existing
|
||||
tag/correspondent.
|
||||
|
||||
* An annoying bug in the date capture code was causing some bogus dates to be
|
||||
attached to documents, which in turn busted the UI. Thanks to `Andrew Peng`_
|
||||
for reporting this. `#414`_.
|
||||
|
||||
* A bug in the Dockerfile meant that Tesseract language files weren't being
|
||||
installed correctly. `euri10`_ was quick to provide a fix: `#406`_, `#413`_.
|
||||
|
||||
* Document consumption is now wrapped in a transaction as per an old ticket
|
||||
`#262`_.
|
||||
|
||||
* The ``get_date()`` functionality of the parsers has been consolidated onto
|
||||
the ``DocumentParser`` class since much of that code was redundant anyway.
|
||||
|
||||
|
||||
2.4.0
|
||||
=====
|
||||
|
||||
* A new set of actions are now available thanks to `jonaswinkler`_'s very first
|
||||
pull request! You can now do nifty things like tag documents in bulk, or set
|
||||
correspondents in bulk. `#405`_
|
||||
* The import/export system is now a little smarter. By default, documents are
|
||||
tagged as ``unencrypted``, since exports are by their nature unencrypted.
|
||||
It's now in the import step that we decide the storage type. This allows you
|
||||
to export from an encrypted system and import into an unencrypted one, or
|
||||
vice-versa.
|
||||
* The migration history has been slightly modified to accommodate PostgreSQL
|
||||
users. Additionally, you can now tell paperless to use PostgreSQL simply by
|
||||
declaring ``PAPERLESS_DBUSER`` in your environment. This will attempt to
|
||||
connect to your Postgres database without a password unless you also set
|
||||
``PAPERLESS_DBPASS``.
|
||||
* A bug was found in the REST API filter system that was the result of an
|
||||
update of django-filter some time ago. This has now been patched in `#412`_.
|
||||
Thanks to `thepill`_ for spotting it!
|
||||
|
||||
|
||||
2.3.0
|
||||
=====
|
||||
|
||||
* Support for consuming plain text & markdown documents was added by
|
||||
`Joshua Taillon`_! This was a long-requested feature, and its addition is
|
||||
likely to be greatly appreciated by the community: `#395`_ Thanks also to
|
||||
`David Martin`_ for his assistance on the issue.
|
||||
* `dubit0`_ found & fixed a bug that prevented management commands from running
|
||||
before we had an operational database: `#396`_
|
||||
* Joshua also added a simple update to the thumbnail generation process to
|
||||
improve performance: `#399`_
|
||||
* As his last bit of effort on this release, Joshua also added some code to
|
||||
allow you to view the documents inline rather than download them as an
|
||||
attachment. `#400`_
|
||||
* Finally, `ahyear`_ found a slip in the Docker documentation and patched it.
|
||||
`#401`_
|
||||
|
||||
|
||||
2.2.1
|
||||
=====
|
||||
|
||||
* `Kyle Lucy`_ reported a bug quickly after the release of 2.2.0 where we broke
|
||||
the ``DISABLE_LOGIN`` feature: `#392`_.
|
||||
|
||||
|
||||
2.2.0
|
||||
=====
|
||||
|
||||
* Thanks to `dadosch`_, `Wolfgang Mader`_, and `Tim Brooks`_ this is the first
|
||||
version of Paperless that supports Django 2.0! As a result of their hard
|
||||
work, you can now also run Paperless on Python 3.7 as well: `#386`_ &
|
||||
`#390`_.
|
||||
* `Stéphane Brunner`_ added a few lines of code that made tagging interface a
|
||||
lot easier on those of us with lots of different tags: `#391`_.
|
||||
* `Kilian Koeltzsch`_ noticed a bug in how we capture & automatically create
|
||||
tags, so that's fixed now too: `#384`_.
|
||||
* `erikarvstedt`_ tweaked the behaviour of the test suite to be better behaved
|
||||
for packaging environments: `#383`_.
|
||||
* `Lukasz Soluch`_ added CORS support to make building a new Javascript-based
|
||||
front-end cleaner & easier: `#387`_.
|
||||
|
||||
|
||||
2.1.0
|
||||
=====
|
||||
|
||||
* `Enno Lohmeier`_ added three simple features that make Paperless a lot more
|
||||
user (and developer) friendly:
|
||||
|
||||
1. There's a new search box on the front page: `#374`_.
|
||||
2. The correspondents & tags pages now have a column showing the number of
|
||||
relevant documents: `#375`_.
|
||||
3. The Dockerfile has been tweaked to build faster for those of us who are
|
||||
doing active development on Paperless using the Docker environment:
|
||||
`#376`_.
|
||||
|
||||
* You now also have the ability to customise the interface to your heart's
|
||||
content by creating a file called ``overrides.css`` and/or ``overrides.js``
|
||||
in the root of your media directory. Thanks to `Mark McFate`_ for this
|
||||
idea: `#371`_
|
||||
|
||||
|
||||
2.0.0
|
||||
=====
|
||||
|
||||
This is a big release as we've changed a core-functionality of Paperless: we no
|
||||
longer encrypt files with GPG by default.
|
||||
|
||||
The reasons for this are many, but it boils down to that the encryption wasn't
|
||||
really all that useful, as files on-disk were still accessible so long as you
|
||||
had the key, and the key was most typically stored in the config file. In
|
||||
other words, your files are only as safe as the ``paperless`` user is. In
|
||||
addition to that, *the contents of the documents were never encrypted*, so
|
||||
important numbers etc. were always accessible simply by querying the database.
|
||||
Still, it was better than nothing, but the consensus from users appears to be
|
||||
that it was more an annoyance than anything else, so this feature is now turned
|
||||
off unless you explicitly set a passphrase in your config file.
|
||||
|
||||
Migrating from 1.x
|
||||
------------------
|
||||
|
||||
Encryption isn't gone, it's just off for new users. So long as you have
|
||||
``PAPERLESS_PASSPHRASE`` set in your config or your environment, Paperless
|
||||
should continue to operate as it always has. If however, you want to drop
|
||||
encryption too, you only need to do two things:
|
||||
|
||||
1. Run ``./manage.py migrate && ./manage.py change_storage_type gpg unencrypted``.
|
||||
This will go through your entire database and Decrypt All The Things.
|
||||
2. Remove ``PAPERLESS_PASSPHRASE`` from your ``paperless.conf`` file, or simply
|
||||
stop declaring it in your environment.
|
||||
|
||||
Special thanks to `erikarvstedt`_, `matthewmoto`_, and `mcronce`_ who did the
|
||||
bulk of the work on this big change.
|
||||
|
||||
1.4.0
|
||||
=====
|
||||
|
||||
* `Quentin Dawans`_ has refactored the document consumer to allow for some
|
||||
command-line options. Notably, you can now direct it to consume from a
|
||||
particular ``--directory``, limit the ``--loop-time``, set the time between
|
||||
mail server checks with ``--mail-delta`` or just run it as a one-off with
|
||||
``--one-shot``. See `#305`_ & `#313`_ for more information.
|
||||
* Refactor the use of travis/tox/pytest/coverage into two files:
|
||||
``.travis.yml`` and ``setup.cfg``.
|
||||
* Start generating requirements.txt from a Pipfile. I'll probably switch over
|
||||
to just using pipenv in the future.
|
||||
* Allow for an alternative FreeBSD-friendly location for ``paperless.conf``.
|
||||
Thanks to `Martin Arendtsen`_ who provided this (`#322`_).
|
||||
* Document consumption events are now logged in the Django admin events log.
|
||||
Thanks to `CkuT`_ for doing the legwork on this one and to `Quentin Dawans`_
|
||||
& `David Martin`_ for helping to coordinate & work out how the feature would
|
||||
be developed.
|
||||
* `erikarvstedt`_ contributed a pull request (`#328`_) to add ``--noreload``
|
||||
to the default server start process. This helps reduce the load imposed
|
||||
by the running webservice.
|
||||
* Through some discussion on `#253`_ and `#323`_, we've removed a few of the
|
||||
hardcoded URL values to make it easier for people to host Paperless on a
|
||||
subdirectory. Thanks to `Quentin Dawans`_ and `Kyle Lucy`_ for helping to
|
||||
work this out.
|
||||
* The clickable area for documents on the listing page has been increased to a
|
||||
more predictable space thanks to a glorious hack from `erikarvstedt`_ in
|
||||
`#344`_.
|
||||
* `Strubbl`_ noticed an annoying bug in the bash script wrapping the Docker
|
||||
entrypoint and fixed it with some very creative Bash skills: `#352`_.
|
||||
* You can now use the search field to find documents by tag thanks to
|
||||
`thinkjk`_'s *first ever issue*: `#354`_.
|
||||
* Inotify is now being used to detect additions to the consume directory thanks
|
||||
to some excellent work from `erikarvstedt`_ on `#351`_
|
||||
|
||||
1.3.0
|
||||
=====
|
||||
|
||||
* You can now run Paperless without a login, though you'll still have to create
|
||||
at least one user. This is thanks to a pull-request from `matthewmoto`_:
|
||||
`#295`_. Note that logins are still required by default, and that you need
|
||||
to disable them by setting ``PAPERLESS_DISABLE_LOGIN="true"`` in your
|
||||
environment or in ``/etc/paperless.conf``.
|
||||
* Fix for `#303`_ where sketchily-formatted documents could cause the consumer
|
||||
to break and insert half-records into the database breaking all sorts of
|
||||
things. We now capture the return codes of both ``convert`` and ``unpaper``
|
||||
and fail-out nicely.
|
||||
* Fix for additional date types thanks to input from `Isaac`_ and code from
|
||||
`BastianPoe`_ (`#301`_).
|
||||
* Fix for running migrations in the Docker container (`#299`_). Thanks to
|
||||
`Georgi Todorov`_ for the fix (`#300`_) and to `Pit`_ for the review.
|
||||
* Fix for Docker cases where the issuing user is not UID 1000. This was a
|
||||
collaborative fix between `Jeffrey Portman`_ and `Pit`_ in `#311`_ and
|
||||
`#312`_ to fix `#306`_.
|
||||
* Patch the historical migrations to support MySQL's um, *interesting* way of
|
||||
handling indexes (`#308`_). Thanks to `Simon Taddiken`_ for reporting the
|
||||
problem and helping me find where to fix it.
|
||||
|
||||
1.2.0
|
||||
=====
|
||||
|
||||
* New Docker image, now based on Alpine, thanks to the efforts of `addadi`_
|
||||
and `Pit`_. This new image is dramatically smaller than the Debian-based
|
||||
one, and it also has `a new home on Docker Hub`_. A proper thank-you to
|
||||
`Pit`_ for hosting the image on his Docker account all this time, but after
|
||||
some discussion, we decided the image needed a more *official-looking* home.
|
||||
* `BastianPoe`_ has added the long-awaited feature to automatically skip the
|
||||
OCR step when the PDF already contains text. This can be overridden by
|
||||
setting ``PAPERLESS_OCR_ALWAYS=YES`` either in your ``paperless.conf`` or
|
||||
in the environment. Note that this also means that Paperless now requires
|
||||
``libpoppler-cpp-dev`` to be installed. **Important**: You'll need to run
|
||||
``pip install -r requirements.txt`` after the usual ``git pull`` to
|
||||
properly update.
|
||||
* `BastianPoe`_ has also contributed a monumental amount of work (`#291`_) to
|
||||
solving `#158`_: setting the document creation date based on finding a date
|
||||
in the document text.
|
||||
|
||||
1.1.0
|
||||
=====
|
||||
|
||||
* Fix for `#283`_, a redirect bug which broke interactions with
|
||||
paperless-desktop. Thanks to `chris-aeviator`_ for reporting it.
|
||||
* Addition of an optional new financial year filter, courtesy of
|
||||
`David Martin`_ `#256`_
|
||||
* Fixed a typo in how thumbnails were named in exports `#285`_, courtesy of
|
||||
`Dan Panzarella`_
|
||||
|
||||
1.0.0
|
||||
=====
|
||||
|
||||
* Upgrade to Django 1.11. **You'll need to run
|
||||
``pip install -r requirements.txt`` after the usual ``git pull`` to
|
||||
properly update**.
|
||||
* Replace the templatetag-based hack we had for document listing in favour of
|
||||
a slightly less ugly solution in the form of another template tag with less
|
||||
copypasta.
|
||||
* Support for multi-word-matches for auto-tagging thanks to an excellent
|
||||
patch from `ishirav`_ `#277`_.
|
||||
* Fixed a CSS bug reported by `Stefan Hagen`_ that caused an overlapping of
|
||||
the text and checkboxes under some resolutions `#272`_.
|
||||
* Patched the Docker config to force the serving of static files. Credit for
|
||||
this one goes to `dev-rke`_ via `#248`_.
|
||||
* Fix file permissions during Docker start up thanks to `Pit`_ on `#268`_.
|
||||
* Date fields in the admin are now expressed as HTML5 date fields thanks to
|
||||
`Lukas Winkler`_'s issue `#278`_
|
||||
|
||||
0.8.0
|
||||
=====
|
||||
|
||||
* Paperless can now run in a subdirectory on a host (``/paperless``), rather
|
||||
than always running in the root (``/``) thanks to `maphy-psd`_'s work on
|
||||
`#255`_.
|
||||
|
||||
0.7.0
|
||||
=====
|
||||
|
||||
* **Potentially breaking change**: As per `#235`_, Paperless will no longer
|
||||
automatically delete documents attached to correspondents when those
|
||||
correspondents are themselves deleted. This was Django's default
|
||||
behaviour, but didn't make much sense in Paperless' case. Thanks to
|
||||
`Thomas Brueggemann`_ and `David Martin`_ for their input on this one.
|
||||
* Fix for `#232`_ wherein Paperless wasn't recognising ``.tif`` files
|
||||
properly. Thanks to `ayounggun`_ for reporting this one and to
|
||||
`Kusti Skytén`_ for posting the correct solution in the Github issue.
|
||||
|
||||
0.6.0
|
||||
=====
|
||||
|
||||
* Abandon the shared-secret trick we were using for the POST API in favour
|
||||
of BasicAuth or Django session.
|
||||
* Fix the POST API so it actually works. `#236`_
|
||||
* **Breaking change**: We've dropped the use of ``PAPERLESS_SHARED_SECRET``
|
||||
as it was being used both for the API (now replaced with a normal auth)
|
||||
and for email polling. Now that we're only using it for email, this
|
||||
variable has been renamed to ``PAPERLESS_EMAIL_SECRET``. The old value
|
||||
will still work for a while, but you should change your config if you've
|
||||
been using the email polling feature. Thanks to `Joshua Gilman`_ for all
|
||||
the help with this feature.
|
||||
|
||||
0.5.0
|
||||
=====
|
||||
|
||||
* Support for fuzzy matching in the auto-tagger & auto-correspondent systems
|
||||
thanks to `Jake Gysland`_'s patch `#220`_.
|
||||
* Modified the Dockerfile to prepare an export directory (`#212`_). Thanks
|
||||
to combined efforts from `Pit`_ and `Strubbl`_ in working out the kinks on
|
||||
this one.
|
||||
* Updated the import/export scripts to include support for thumbnails. Big
|
||||
thanks to `CkuT`_ for finding this shortcoming and doing the work to get
|
||||
it fixed in `#224`_.
|
||||
* All of the following changes are thanks to `David Martin`_:
|
||||
* Bumped the dependency on pyocr to 0.4.7 so new users can make use of
|
||||
Tesseract 4 if they so prefer (`#226`_).
|
||||
* Fixed a number of issues with the automated mail handler (`#227`_, `#228`_)
|
||||
* Amended the documentation for better handling of systemd service files (`#229`_)
|
||||
* Amended the Django Admin configuration to have nice headers (`#230`_)
|
||||
|
||||
0.4.1
|
||||
=====
|
||||
|
||||
* Fix for `#206`_ wherein the pluggable parser didn't recognise files with
|
||||
all-caps suffixes like ``.PDF``
|
||||
|
||||
0.4.0
|
||||
=====
|
||||
|
||||
* Introducing reminders. See `#199`_ for more information, but the short
|
||||
explanation is that you can now attach simple notes & times to documents
|
||||
which are made available via the API. Currently, the default API
|
||||
(basically just the Django admin) doesn't really make use of this, but
|
||||
`Thomas Brueggemann`_ over at `Paperless Desktop`_ has said that he would
|
||||
like to make use of this feature in his project.
|
||||
|
||||
0.3.6
|
||||
=====
|
||||
|
||||
* Fix for `#200`_ (!!) where the API wasn't configured to allow updating the
|
||||
correspondent or the tags for a document.
|
||||
* The ``content`` field is now optional, to allow for the edge case of a
|
||||
purely graphical document.
|
||||
* You can no longer add documents via the admin. This never worked in the
|
||||
first place, so all I've done here is remove the link to the broken form.
|
||||
* The consumer code has been heavily refactored to support a pluggable
|
||||
interface. Install a paperless consumer via pip and tell paperless about
|
||||
it with an environment variable, and you're good to go. Proper
|
||||
documentation is on its way.
|
||||
|
||||
0.3.5
|
||||
=====
|
||||
|
||||
* A serious facelift for the documents listing page wherein we drop the
|
||||
tabular layout in favour of a tiled interface.
|
||||
* Users can now configure the number of items per page.
|
||||
* Fix for `#171`_: Allow users to specify their own ``SECRET_KEY`` value.
|
||||
* Moved the dotenv loading to the top of settings.py
|
||||
* Fix for `#112`_: Added checks for binaries required for document
|
||||
consumption.
|
||||
|
||||
0.3.4
|
||||
=====
|
||||
|
||||
* Removal of django-suit due to a licensing conflict I bumped into in 0.3.3.
|
||||
Note that you *can* use Django Suit with Paperless, but only in a
|
||||
non-profit situation as their free license prohibits for-profit use. As a
|
||||
result, I can't bundle Suit with Paperless without conflicting with the
|
||||
GPL. Further development will be done against the stock Django admin.
|
||||
* I shrunk the thumbnails a little 'cause they were too big for me, even on
|
||||
my high-DPI monitor.
|
||||
* BasicAuth support for document and thumbnail downloads, as well as the Push
|
||||
API thanks to @thomasbrueggemann. See `#179`_.
|
||||
|
||||
0.3.3
|
||||
=====
|
||||
|
||||
* Thumbnails in the UI and a Django-suit-based face-lift courtesy of @ekw!
|
||||
* Timezone, items per page, and default language are now all configurable,
|
||||
also thanks to @ekw.
|
||||
|
||||
0.3.2
|
||||
=====
|
||||
|
||||
* Fix for `#172`_: defaulting ALLOWED_HOSTS to ``["*"]`` and allowing the
|
||||
user to set her own value via ``PAPERLESS_ALLOWED_HOSTS`` should the need
|
||||
arise.
|
||||
|
||||
0.3.1
|
||||
=====
|
||||
|
||||
* Added a default value for ``CONVERT_BINARY``
|
||||
|
||||
0.3.0
|
||||
=====
|
||||
|
||||
* Updated to using django-filter 1.x
|
||||
* Added some system checks so new users aren't confused by misconfigurations.
|
||||
* Consumer loop time is now configurable for systems with slow writes. Just
|
||||
set ``PAPERLESS_CONSUMER_LOOP_TIME`` to a number of seconds. The default
|
||||
is 10.
|
||||
* As per `#44`_, we've removed support for ``PAPERLESS_CONVERT``,
|
||||
``PAPERLESS_CONSUME``, and ``PAPERLESS_SECRET``. Please use
|
||||
``PAPERLESS_CONVERT_BINARY``, ``PAPERLESS_CONSUMPTION_DIR``, and
|
||||
``PAPERLESS_SHARED_SECRET`` respectively instead.
|
||||
|
||||
0.2.0
|
||||
=====
|
||||
|
||||
* `#150`_: The media root is now a variable you can set in
|
||||
``paperless.conf``.
|
||||
* `#148`_: The database location (sqlite) is now a variable you can set in
|
||||
``paperless.conf``.
|
||||
* `#146`_: Fixed a bug that allowed unauthorised access to the ``/fetch``
|
||||
URL.
|
||||
* `#131`_: Document files are now automatically removed from disk when
|
||||
they're deleted in Paperless.
|
||||
* `#121`_: Fixed a bug where Paperless wasn't setting document creation time
|
||||
based on the file naming scheme.
|
||||
* `#81`_: Added a hook to run an arbitrary script after every document is
|
||||
consumed.
|
||||
* `#98`_: Added optional environment variables for ImageMagick so that it
|
||||
doesn't explode when handling Very Large Documents or when it's just
|
||||
running on a low-memory system. Thanks to `Florian Harr`_ for his help on
|
||||
this one.
|
||||
* `#89`_ Ported the auto-tagging code to correspondents as well. Thanks to
|
||||
`Justin Snyman`_ for the pointers in the issue queue.
|
||||
* Added support for guessing the date from the file name along with the
|
||||
correspondent, title, and tags. Thanks to `Tikitu de Jager`_ for his pull
|
||||
request that I took forever to merge and to `Pit`_ for his efforts on the
|
||||
regex front.
|
||||
* `#94`_: Restored support for changing the created date in the UI. Thanks
|
||||
to `Martin Honermeyer`_ and `Tim White`_ for working with me on this.
|
||||
|
||||
0.1.1
|
||||
=====
|
||||
|
||||
* Potentially **Breaking Change**: All references to "sender" in the code
|
||||
have been renamed to "correspondent" to better reflect the nature of the
|
||||
property (one could quite reasonably scan a document before sending it to
|
||||
someone.)
|
||||
* `#67`_: Rewrote the document exporter and added a new importer that allows
|
||||
for full metadata retention without depending on the file name and
|
||||
modification time. A big thanks to `Tikitu de Jager`_, `Pit`_,
|
||||
`Florian Jung`_, and `Christopher Luu`_ for their code snippets and
|
||||
contributing conversation that lead to this change.
|
||||
* `#20`_: Added *unpaper* support to help in cleaning up the scanned image
|
||||
before it's OCR'd. Thanks to `Pit`_ for this one.
|
||||
* `#71`_ Added (encrypted) thumbnails in anticipation of a proper UI.
|
||||
* `#68`_: Added support for using a proper config file at
|
||||
``/etc/paperless.conf`` and modified the systemd unit files to use it.
|
||||
* Refactored the Vagrant installation process to use environment variables
|
||||
rather than asking the user to modify ``settings.py``.
|
||||
* `#44`_: Harmonise environment variable names with constant names.
|
||||
* `#60`_: Setup logging to actually use the Python native logging framework.
|
||||
* `#53`_: Fixed an annoying bug that caused ``.jpeg`` and ``.JPG`` images
|
||||
to be imported but made unavailable.
|
||||
|
||||
0.1.0
|
||||
=====
|
||||
|
||||
* Docker support! Big thanks to `Wayne Werner`_, `Brian Conn`_, and
|
||||
`Tikitu de Jager`_ for this one, and especially to `Pit`_
|
||||
who spearheaded this effort.
|
||||
* A simple REST API is in place, but it should be considered unstable.
|
||||
* Cleaned up the consumer to use temporary directories instead of a single
|
||||
scratch space. (Thanks `Pit`_)
|
||||
* Improved the efficiency of the consumer by parsing pages more intelligently
|
||||
and introducing a threaded OCR process (thanks again `Pit`_).
|
||||
* `#45`_: Cleaned up the logic for tag matching. Reported by `darkmatter`_.
|
||||
* `#47`_: Auto-rotate landscape documents. Reported by `Paul`_ and fixed by
|
||||
`Pit`_.
|
||||
* `#48`_: Matching algorithms should do so on a word boundary (`darkmatter`_)
|
||||
* `#54`_: Documented the re-tagger (`zedster`_)
|
||||
* `#57`_: Make sure file is preserved on import failure (`darkmatter`_)
|
||||
* Added tox with pep8 checking
|
||||
|
||||
0.0.6
|
||||
=====
|
||||
|
||||
* Added support for parallel OCR (significant work from `Pit`_)
|
||||
* Sped up the language detection (significant work from `Pit`_)
|
||||
* Added simple logging
|
||||
|
||||
0.0.5
|
||||
=====
|
||||
|
||||
* Added support for image files as documents (png, jpg, gif, tiff)
|
||||
* Added a crude means of HTTP POST for document imports
|
||||
* Added IMAP mail support
|
||||
* Added a re-tagging utility
|
||||
* Documentation for the above as well as data migration
|
||||
|
||||
0.0.4
|
||||
=====
|
||||
|
||||
* Added automated tagging based on keyword matching
|
||||
* Cleaned up the document listing page
|
||||
* Removed ``User`` and ``Group`` from the admin
|
||||
* Added ``pytz`` to the list of requirements
|
||||
|
||||
0.0.3
|
||||
=====
|
||||
|
||||
* Added basic tagging
|
||||
|
||||
0.0.2
|
||||
=====
|
||||
|
||||
* Added language detection
|
||||
* Added datestamps to ``document_exporter``.
|
||||
* Changed ``settings.TESSERACT_LANGUAGE`` to ``settings.OCR_LANGUAGE``.
|
||||
|
||||
0.0.1
|
||||
=====
|
||||
|
||||
* Initial release
|
||||
|
||||
.. _Brian Conn: https://github.com/TheConnMan
|
||||
.. _Christopher Luu: https://github.com/nuudles
|
||||
.. _Florian Jung: https://github.com/the01
|
||||
.. _Tikitu de Jager: https://github.com/tikitu
|
||||
.. _Paul: https://github.com/polo2ro
|
||||
.. _Pit: https://github.com/pitkley
|
||||
.. _Wayne Werner: https://github.com/waynew
|
||||
.. _darkmatter: https://github.com/darkmatter
|
||||
.. _zedster: https://github.com/zedster
|
||||
.. _Martin Honermeyer: https://github.com/djmaze
|
||||
.. _Tim White: https://github.com/timwhite
|
||||
.. _Florian Harr: https://github.com/evils
|
||||
.. _Justin Snyman: https://github.com/stringlytyped
|
||||
.. _Thomas Brueggemann: https://github.com/thomasbrueggemann
|
||||
.. _Jake Gysland: https://github.com/jgysland
|
||||
.. _Strubbl: https://github.com/strubbl
|
||||
.. _CkuT: https://github.com/CkuT
|
||||
.. _David Martin: https://github.com/ddddavidmartin
|
||||
.. _Paperless Desktop: https://github.com/thomasbrueggemann/paperless-desktop
|
||||
.. _Joshua Gilman: https://github.com/jmgilman
|
||||
.. _ayounggun: https://github.com/ayounggun
|
||||
.. _Kusti Skytén: https://github.com/kskyten
|
||||
.. _maphy-psd: https://github.com/maphy-psd
|
||||
.. _ishirav: https://github.com/ishirav
|
||||
.. _Stefan Hagen: https://github.com/xkpd3
|
||||
.. _dev-rke: https://github.com/dev-rke
|
||||
.. _Lukas Winkler: https://github.com/Findus23
|
||||
.. _chris-aeviator: https://github.com/chris-aeviator
|
||||
.. _Dan Panzarella: https://github.com/pzl
|
||||
.. _addadi: https://github.com/addadi
|
||||
.. _BastianPoe: https://github.com/BastianPoe
|
||||
.. _matthewmoto: https://github.com/matthewmoto
|
||||
.. _Isaac: https://github.com/isaacsando
|
||||
.. _Georgi Todorov: https://github.com/TeraHz
|
||||
.. _Jeffrey Portman: https://github.com/ChromoX
|
||||
.. _Simon Taddiken: https://github.com/skuzzle
|
||||
.. _Quentin Dawans: https://github.com/ovv
|
||||
.. _Martin Arendtsen: https://github.com/Arendtsen
|
||||
.. _erikarvstedt: https://github.com/erikarvstedt
|
||||
.. _Kyle Lucy: https://github.com/kmlucy
|
||||
.. _thinkjk: https://github.com/thinkjk
|
||||
.. _mcronce: https://github.com/mcronce
|
||||
.. _Enno Lohmeier: https://github.com/elohmeier
|
||||
.. _Mark McFate: https://github.com/SummittDweller
|
||||
.. _dadosch: https://github.com/dadosch
|
||||
.. _Wolfgang Mader: https://github.com/wmader
|
||||
.. _Tim Brooks: https://github.com/brookst
|
||||
.. _Stéphane Brunner: https://github.com/sbrunner
|
||||
.. _Kilian Koeltzsch: https://github.com/kiliankoe
|
||||
.. _Lukasz Soluch: https://github.com/LukaszSolo
|
||||
.. _Joshua Taillon: https://github.com/jat255
|
||||
.. _dubit0: https://github.com/dubit0
|
||||
.. _ahyear: https://github.com/ahyear
|
||||
.. _jonaswinkler: https://github.com/jonaswinkler
|
||||
.. _thepill: https://github.com/thepill
|
||||
.. _Andrew Peng: https://github.com/pengc99
|
||||
.. _euri10: https://github.com/euri10
|
||||
.. _Ulli: https://github.com/Ulli2k
|
||||
.. _tsia: https://github.com/tsia
|
||||
.. _Sblop: https://github.com/Sblop
|
||||
|
||||
.. _#20: https://github.com/danielquinn/paperless/issues/20
|
||||
.. _#44: https://github.com/danielquinn/paperless/issues/44
|
||||
.. _#45: https://github.com/danielquinn/paperless/issues/45
|
||||
.. _#47: https://github.com/danielquinn/paperless/issues/47
|
||||
.. _#48: https://github.com/danielquinn/paperless/issues/48
|
||||
.. _#53: https://github.com/danielquinn/paperless/issues/53
|
||||
.. _#54: https://github.com/danielquinn/paperless/issues/54
|
||||
.. _#57: https://github.com/danielquinn/paperless/issues/57
|
||||
.. _#60: https://github.com/danielquinn/paperless/issues/60
|
||||
.. _#67: https://github.com/danielquinn/paperless/issues/67
|
||||
.. _#68: https://github.com/danielquinn/paperless/issues/68
|
||||
.. _#71: https://github.com/danielquinn/paperless/issues/71
|
||||
.. _#81: https://github.com/danielquinn/paperless/issues/81
|
||||
.. _#89: https://github.com/danielquinn/paperless/issues/89
|
||||
.. _#94: https://github.com/danielquinn/paperless/issues/94
|
||||
.. _#98: https://github.com/danielquinn/paperless/issues/98
|
||||
.. _#112: https://github.com/danielquinn/paperless/issues/112
|
||||
.. _#121: https://github.com/danielquinn/paperless/issues/121
|
||||
.. _#131: https://github.com/danielquinn/paperless/issues/131
|
||||
.. _#146: https://github.com/danielquinn/paperless/issues/146
|
||||
.. _#148: https://github.com/danielquinn/paperless/pull/148
|
||||
.. _#150: https://github.com/danielquinn/paperless/pull/150
|
||||
.. _#158: https://github.com/danielquinn/paperless/issues/158
|
||||
.. _#171: https://github.com/danielquinn/paperless/issues/171
|
||||
.. _#172: https://github.com/danielquinn/paperless/issues/172
|
||||
.. _#179: https://github.com/danielquinn/paperless/pull/179
|
||||
.. _#199: https://github.com/danielquinn/paperless/issues/199
|
||||
.. _#200: https://github.com/danielquinn/paperless/issues/200
|
||||
.. _#206: https://github.com/danielquinn/paperless/issues/206
|
||||
.. _#212: https://github.com/danielquinn/paperless/pull/212
|
||||
.. _#220: https://github.com/danielquinn/paperless/pull/220
|
||||
.. _#224: https://github.com/danielquinn/paperless/pull/224
|
||||
.. _#226: https://github.com/danielquinn/paperless/pull/226
|
||||
.. _#227: https://github.com/danielquinn/paperless/pull/227
|
||||
.. _#228: https://github.com/danielquinn/paperless/pull/228
|
||||
.. _#229: https://github.com/danielquinn/paperless/pull/229
|
||||
.. _#230: https://github.com/danielquinn/paperless/pull/230
|
||||
.. _#232: https://github.com/danielquinn/paperless/issues/232
|
||||
.. _#235: https://github.com/danielquinn/paperless/issues/235
|
||||
.. _#236: https://github.com/danielquinn/paperless/issues/236
|
||||
.. _#255: https://github.com/danielquinn/paperless/pull/255
|
||||
.. _#268: https://github.com/danielquinn/paperless/pull/268
|
||||
.. _#277: https://github.com/danielquinn/paperless/pull/277
|
||||
.. _#272: https://github.com/danielquinn/paperless/issues/272
|
||||
.. _#248: https://github.com/danielquinn/paperless/issues/248
|
||||
.. _#278: https://github.com/danielquinn/paperless/issues/278
|
||||
.. _#283: https://github.com/danielquinn/paperless/issues/283
|
||||
.. _#256: https://github.com/danielquinn/paperless/pull/256
|
||||
.. _#285: https://github.com/danielquinn/paperless/pull/285
|
||||
.. _#291: https://github.com/danielquinn/paperless/pull/291
|
||||
.. _#295: https://github.com/danielquinn/paperless/pull/295
|
||||
.. _#299: https://github.com/danielquinn/paperless/issues/299
|
||||
.. _#300: https://github.com/danielquinn/paperless/pull/300
|
||||
.. _#301: https://github.com/danielquinn/paperless/issues/301
|
||||
.. _#303: https://github.com/danielquinn/paperless/issues/303
|
||||
.. _#305: https://github.com/danielquinn/paperless/issues/305
|
||||
.. _#306: https://github.com/danielquinn/paperless/issues/306
|
||||
.. _#308: https://github.com/danielquinn/paperless/issues/308
|
||||
.. _#311: https://github.com/danielquinn/paperless/pull/311
|
||||
.. _#312: https://github.com/danielquinn/paperless/pull/312
|
||||
.. _#313: https://github.com/danielquinn/paperless/pull/313
|
||||
.. _#322: https://github.com/danielquinn/paperless/pull/322
|
||||
.. _#328: https://github.com/danielquinn/paperless/pull/328
|
||||
.. _#253: https://github.com/danielquinn/paperless/issues/253
|
||||
.. _#262: https://github.com/danielquinn/paperless/issues/262
|
||||
.. _#323: https://github.com/danielquinn/paperless/issues/323
|
||||
.. _#344: https://github.com/danielquinn/paperless/pull/344
|
||||
.. _#351: https://github.com/danielquinn/paperless/pull/351
|
||||
.. _#352: https://github.com/danielquinn/paperless/pull/352
|
||||
.. _#354: https://github.com/danielquinn/paperless/issues/354
|
||||
.. _#371: https://github.com/danielquinn/paperless/issues/371
|
||||
.. _#374: https://github.com/danielquinn/paperless/pull/374
|
||||
.. _#375: https://github.com/danielquinn/paperless/pull/375
|
||||
.. _#376: https://github.com/danielquinn/paperless/pull/376
|
||||
.. _#383: https://github.com/danielquinn/paperless/pull/383
|
||||
.. _#384: https://github.com/danielquinn/paperless/issues/384
|
||||
.. _#386: https://github.com/danielquinn/paperless/issues/386
|
||||
.. _#387: https://github.com/danielquinn/paperless/pull/387
|
||||
.. _#391: https://github.com/danielquinn/paperless/pull/391
|
||||
.. _#390: https://github.com/danielquinn/paperless/pull/390
|
||||
.. _#392: https://github.com/danielquinn/paperless/issues/392
|
||||
.. _#393: https://github.com/danielquinn/paperless/issues/393
|
||||
.. _#395: https://github.com/danielquinn/paperless/pull/395
|
||||
.. _#394: https://github.com/danielquinn/paperless/issues/394
|
||||
.. _#396: https://github.com/danielquinn/paperless/pull/396
|
||||
.. _#399: https://github.com/danielquinn/paperless/pull/399
|
||||
.. _#400: https://github.com/danielquinn/paperless/pull/400
|
||||
.. _#401: https://github.com/danielquinn/paperless/pull/401
|
||||
.. _#405: https://github.com/danielquinn/paperless/pull/405
|
||||
.. _#406: https://github.com/danielquinn/paperless/issues/406
|
||||
.. _#412: https://github.com/danielquinn/paperless/issues/412
|
||||
.. _#413: https://github.com/danielquinn/paperless/pull/413
|
||||
.. _#414: https://github.com/danielquinn/paperless/issues/414
|
||||
.. _#423: https://github.com/danielquinn/paperless/issues/423
|
||||
.. _#433: https://github.com/danielquinn/paperless/issues/433
|
||||
.. _#440: https://github.com/danielquinn/paperless/pull/440
|
||||
.. _#441: https://github.com/danielquinn/paperless/pull/441
|
||||
.. _#442: https://github.com/danielquinn/paperless/pull/442
|
||||
|
||||
.. _pipenv: https://docs.pipenv.org/
|
||||
.. _a new home on Docker Hub: https://hub.docker.com/r/danielquinn/paperless/
|
||||
.. _optipng: http://optipng.sourceforge.net/
|
20
docs/changelog_jonaswinkler.rst
Normal file
@@ -0,0 +1,20 @@

Changelog (jonaswinkler)
########################

1.0.0
=====

* First release based on paperless 2.6.0
* Added: Automatic document classification using neural networks (replaces
  regex-based tagging)
* Added: Document types
* Added: Archive serial number allows easy referencing of physical document
  copies
* Added: Inbox tags (added automatically to newly consumed documents)
* Added: Document viewer on document edit page
* Database backend is now configurable

1.0.1
=====

* Fixed migration order
255
docs/conf.py
@@ -1,40 +1,64 @@
|
||||
import sphinx_rtd_theme
|
||||
# -*- coding: utf-8 -*-
|
||||
#
|
||||
# Paperless documentation build configuration file, created by
|
||||
# sphinx-quickstart on Mon Oct 26 18:36:52 2015.
|
||||
#
|
||||
# This file is execfile()d with the current directory set to its
|
||||
# containing dir.
|
||||
#
|
||||
# Note that not all possible configuration values are present in this
|
||||
# autogenerated file.
|
||||
#
|
||||
# All configuration values have a default; values that are commented out
|
||||
# serve to show the default.
|
||||
|
||||
import sys
|
||||
import os
|
||||
|
||||
__version__ = None
|
||||
__full_version_str__ = None
|
||||
__major_minor_version_str__ = None
|
||||
exec(open("../src/paperless/version.py").read())
|
||||
|
||||
|
||||
# Believe it or not, this is the officially sanctioned way to add custom CSS.
|
||||
def setup(app):
|
||||
app.add_stylesheet("custom.css")
|
||||
|
||||
# If extensions (or modules to document with autodoc) are in another directory,
|
||||
# add these directories to sys.path here. If the directory is relative to the
|
||||
# documentation root, use os.path.abspath to make it absolute, like shown here.
|
||||
#sys.path.insert(0, os.path.abspath('.'))
|
||||
|
||||
# -- General configuration ------------------------------------------------
|
||||
|
||||
# If your documentation needs a minimal Sphinx version, state it here.
|
||||
#needs_sphinx = '1.0'
|
||||
|
||||
# Add any Sphinx extension module names here, as strings. They can be
|
||||
# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom
|
||||
# ones.
|
||||
extensions = [
|
||||
"sphinx.ext.autodoc",
|
||||
"sphinx.ext.intersphinx",
|
||||
"sphinx.ext.todo",
|
||||
"sphinx.ext.imgmath",
|
||||
"sphinx.ext.viewcode",
|
||||
"sphinx_rtd_theme",
|
||||
"myst_parser",
|
||||
'sphinx.ext.autodoc',
|
||||
'sphinx.ext.intersphinx',
|
||||
'sphinx.ext.todo',
|
||||
'sphinx.ext.imgmath',
|
||||
'sphinx.ext.viewcode',
|
||||
]
|
||||
|
||||
# Add any paths that contain templates here, relative to this directory.
|
||||
templates_path = ["_templates"]
|
||||
templates_path = ['_templates']
|
||||
|
||||
# The suffix of source filenames.
|
||||
source_suffix = {
|
||||
".rst": "restructuredtext",
|
||||
".md": "markdown",
|
||||
}
|
||||
source_suffix = '.rst'
|
||||
|
||||
# The encoding of source files.
|
||||
# source_encoding = 'utf-8-sig'
|
||||
#source_encoding = 'utf-8-sig'
|
||||
|
||||
# The master toctree document.
|
||||
master_doc = "index"
|
||||
master_doc = 'index'
|
||||
|
||||
# General information about the project.
|
||||
project = "Paperless-ngx"
|
||||
copyright = "2015-2022, Daniel Quinn, Jonas Winkler, and the paperless-ngx team"
|
||||
project = u'Paperless'
|
||||
copyright = u'2015, Daniel Quinn'
|
||||
|
||||
# The version info for the project you're documenting, acts as replacement for
|
||||
# |version| and |release|, also used in various other places throughout the
|
||||
@@ -47,190 +71,199 @@ copyright = "2015-2022, Daniel Quinn, Jonas Winkler, and the paperless-ngx team"
|
||||
#
|
||||
|
||||
# The short X.Y version.
|
||||
version = __major_minor_version_str__
|
||||
version = ".".join([str(_) for _ in __version__[:2]])
|
||||
# The full version, including alpha/beta/rc tags.
|
||||
release = __full_version_str__
|
||||
release = ".".join([str(_) for _ in __version__[:3]])
|
||||
|
||||
# The language for content autogenerated by Sphinx. Refer to documentation
|
||||
# for a list of supported languages.
|
||||
# language = None
|
||||
#language = None
|
||||
|
||||
# There are two options for replacing |today|: either, you set today to some
|
||||
# non-false value, then it is used:
|
||||
# today = ''
|
||||
#today = ''
|
||||
# Else, today_fmt is used as the format for a strftime call.
|
||||
# today_fmt = '%B %d, %Y'
|
||||
#today_fmt = '%B %d, %Y'
|
||||
|
||||
# List of patterns, relative to source directory, that match files and
|
||||
# directories to ignore when looking for source files.
|
||||
exclude_patterns = ["_build"]
|
||||
exclude_patterns = ['_build']
|
||||
|
||||
# The reST default role (used for this markup: `text`) to use for all
|
||||
# documents.
|
||||
# default_role = None
|
||||
#default_role = None
|
||||
|
||||
# If true, '()' will be appended to :func: etc. cross-reference text.
|
||||
# add_function_parentheses = True
|
||||
#add_function_parentheses = True
|
||||
|
||||
# If true, the current module name will be prepended to all description
|
||||
# unit titles (such as .. function::).
|
||||
# add_module_names = True
|
||||
#add_module_names = True
|
||||
|
||||
# If true, sectionauthor and moduleauthor directives will be shown in the
|
||||
# output. They are ignored by default.
|
||||
# show_authors = False
|
||||
#show_authors = False
|
||||
|
||||
# The name of the Pygments (syntax highlighting) style to use.
|
||||
pygments_style = "sphinx"
|
||||
pygments_style = 'sphinx'
|
||||
|
||||
# A list of ignored prefixes for module index sorting.
|
||||
# modindex_common_prefix = []
|
||||
#modindex_common_prefix = []
|
||||
|
||||
# If true, keep warnings as "system message" paragraphs in the built documents.
|
||||
# keep_warnings = False
|
||||
#keep_warnings = False
|
||||
|
||||
|
||||
# -- Options for HTML output ----------------------------------------------
|
||||
|
||||
# The theme to use for HTML and HTML Help pages. See the documentation for
|
||||
# a list of builtin themes.
|
||||
html_theme = "sphinx_rtd_theme"
|
||||
html_theme = 'default'
|
||||
|
||||
# Theme options are theme-specific and customize the look and feel of a theme
|
||||
# further. For a list of options available for each theme, see the
|
||||
# documentation.
|
||||
# html_theme_options = {}
|
||||
#html_theme_options = {}
|
||||
|
||||
# Add any paths that contain custom themes here, relative to this directory.
|
||||
html_theme_path = []
|
||||
|
||||
# The name for this set of Sphinx documents. If None, it defaults to
|
||||
# "<project> v<release> documentation".
|
||||
# html_title = None
|
||||
#html_title = None
|
||||
|
||||
# A shorter title for the navigation bar. Default is the same as html_title.
|
||||
# html_short_title = None
|
||||
#html_short_title = None
|
||||
|
||||
# The name of an image file (relative to this directory) to place at the top
|
||||
# of the sidebar.
|
||||
# html_logo = None
|
||||
#html_logo = None
|
||||
|
||||
# The name of an image file (within the static path) to use as favicon of the
|
||||
# docs. This file should be a Windows icon file (.ico) being 16x16 or 32x32
|
||||
# pixels large.
|
||||
# html_favicon = None
|
||||
#html_favicon = None
|
||||
|
||||
# Add any paths that contain custom static files (such as style sheets) here,
|
||||
# relative to this directory. They are copied after the builtin static files,
|
||||
# so a file named "default.css" will overwrite the builtin "default.css".
|
||||
html_static_path = ["_static"]
|
||||
|
||||
# These paths are either relative to html_static_path
|
||||
# or fully qualified paths (eg. https://...)
|
||||
html_css_files = [
|
||||
"css/custom.css",
|
||||
]
|
||||
|
||||
html_js_files = [
|
||||
"js/darkmode.js",
|
||||
]
|
||||
html_static_path = ['_static']
|
||||
|
||||
# Add any extra paths that contain custom files (such as robots.txt or
|
||||
# .htaccess) here, relative to this directory. These files are copied
|
||||
# directly to the root of the documentation.
|
||||
# html_extra_path = []
|
||||
#html_extra_path = []
|
||||
|
||||
# If not '', a 'Last updated on:' timestamp is inserted at every page bottom,
|
||||
# using the given strftime format.
|
||||
# html_last_updated_fmt = '%b %d, %Y'
|
||||
#html_last_updated_fmt = '%b %d, %Y'
|
||||
|
||||
# If true, SmartyPants will be used to convert quotes and dashes to
|
||||
# typographically correct entities.
|
||||
# html_use_smartypants = True
|
||||
#html_use_smartypants = True
|
||||
|
||||
# Custom sidebar templates, maps document names to template names.
|
||||
# html_sidebars = {}
|
||||
#html_sidebars = {}
|
||||
|
||||
# Additional templates that should be rendered to pages, maps page names to
|
||||
# template names.
|
||||
# html_additional_pages = {}
|
||||
#html_additional_pages = {}
|
||||
|
||||
# If false, no module index is generated.
|
||||
# html_domain_indices = True
|
||||
#html_domain_indices = True
|
||||
|
||||
# If false, no index is generated.
|
||||
# html_use_index = True
|
||||
#html_use_index = True
|
||||
|
||||
# If true, the index is split into individual pages for each letter.
|
||||
# html_split_index = False
|
||||
#html_split_index = False
|
||||
|
||||
# If true, links to the reST sources are added to the pages.
|
||||
# html_show_sourcelink = True
|
||||
#html_show_sourcelink = True
|
||||
|
||||
# If true, "Created using Sphinx" is shown in the HTML footer. Default is True.
|
||||
# html_show_sphinx = True
|
||||
#html_show_sphinx = True
|
||||
|
||||
# If true, "(C) Copyright ..." is shown in the HTML footer. Default is True.
|
||||
# html_show_copyright = True
|
||||
#html_show_copyright = True
|
||||
|
||||
# If true, an OpenSearch description file will be output, and all pages will
|
||||
# contain a <link> tag referring to it. The value of this option must be the
|
||||
# base URL from which the finished HTML is served.
|
||||
# html_use_opensearch = ''
|
||||
#html_use_opensearch = ''
|
||||
|
||||
# This is the file name suffix for HTML files (e.g. ".xhtml").
|
||||
# html_file_suffix = None
|
||||
#html_file_suffix = None
|
||||
|
||||
# Output file base name for HTML help builder.
|
||||
htmlhelp_basename = "paperless"
|
||||
htmlhelp_basename = 'paperless'
|
||||
|
||||
|
||||
#
|
||||
# Attempt to use the ReadTheDocs theme. If it's not installed, fallback to
|
||||
# the default.
|
||||
#
|
||||
|
||||
try:
|
||||
import sphinx_rtd_theme
|
||||
html_theme = "sphinx_rtd_theme"
|
||||
html_theme_path = [sphinx_rtd_theme.get_html_theme_path()]
|
||||
except ImportError:
|
||||
pass
|
||||
|
||||
# -- Options for LaTeX output ---------------------------------------------
|
||||
|
||||
latex_elements = {
|
||||
# The paper size ('letterpaper' or 'a4paper').
|
||||
#'papersize': 'letterpaper',
|
||||
# The font size ('10pt', '11pt' or '12pt').
|
||||
#'pointsize': '10pt',
|
||||
# Additional stuff for the LaTeX preamble.
|
||||
#'preamble': '',
|
||||
# The paper size ('letterpaper' or 'a4paper').
|
||||
#'papersize': 'letterpaper',
|
||||
|
||||
# The font size ('10pt', '11pt' or '12pt').
|
||||
#'pointsize': '10pt',
|
||||
|
||||
# Additional stuff for the LaTeX preamble.
|
||||
#'preamble': '',
|
||||
}
|
||||
|
||||
# Grouping the document tree into LaTeX files. List of tuples
|
||||
# (source start file, target name, title,
|
||||
# author, documentclass [howto, manual, or own class]).
|
||||
latex_documents = [
|
||||
("index", "paperless.tex", "Paperless Documentation", "Daniel Quinn", "manual"),
|
||||
('index', 'paperless.tex', u'Paperless Documentation',
|
||||
u'Daniel Quinn', 'manual'),
|
||||
]
|
||||
|
||||
# The name of an image file (relative to this directory) to place at the top of
|
||||
# the title page.
|
||||
# latex_logo = None
|
||||
#latex_logo = None
|
||||
|
||||
# For "manual" documents, if this is true, then toplevel headings are parts,
|
||||
# not chapters.
|
||||
# latex_use_parts = False
|
||||
#latex_use_parts = False
|
||||
|
||||
# If true, show page references after internal links.
|
||||
# latex_show_pagerefs = False
|
||||
#latex_show_pagerefs = False
|
||||
|
||||
# If true, show URL addresses after external links.
|
||||
# latex_show_urls = False
|
||||
#latex_show_urls = False
|
||||
|
||||
# Documents to append as an appendix to all manuals.
|
||||
# latex_appendices = []
|
||||
#latex_appendices = []
|
||||
|
||||
# If false, no module index is generated.
|
||||
# latex_domain_indices = True
|
||||
#latex_domain_indices = True
|
||||
|
||||
|
||||
# -- Options for manual page output ---------------------------------------
|
||||
|
||||
# One entry per manual page. List of tuples
|
||||
# (source start file, name, description, authors, manual section).
|
||||
man_pages = [("index", "paperless", "Paperless Documentation", ["Daniel Quinn"], 1)]
|
||||
man_pages = [
|
||||
('index', 'paperless', u'Paperless Documentation',
|
||||
[u'Daniel Quinn'], 1)
|
||||
]
|
||||
|
||||
# If true, show URL addresses after external links.
|
||||
# man_show_urls = False
|
||||
#man_show_urls = False
|
||||
|
||||
|
||||
# -- Options for Texinfo output -------------------------------------------
|
||||
@@ -239,99 +272,93 @@ man_pages = [("index", "paperless", "Paperless Documentation", ["Daniel Quinn"],
|
||||
# (source start file, target name, title, author,
|
||||
# dir menu entry, description, category)
|
||||
texinfo_documents = [
|
||||
(
|
||||
"index",
|
||||
"Paperless",
|
||||
"Paperless Documentation",
|
||||
"Daniel Quinn",
|
||||
"paperless",
|
||||
"Scan, index, and archive all of your paper documents.",
|
||||
"Miscellaneous",
|
||||
),
|
||||
('index', 'Paperless', u'Paperless Documentation',
|
||||
u'Daniel Quinn', 'paperless', 'Scan, index, and archive all of your paper documents.',
|
||||
'Miscellaneous'),
|
||||
]
|
||||
|
||||
# Documents to append as an appendix to all manuals.
|
||||
# texinfo_appendices = []
|
||||
#texinfo_appendices = []
|
||||
|
||||
# If false, no module index is generated.
|
||||
# texinfo_domain_indices = True
|
||||
#texinfo_domain_indices = True
|
||||
|
||||
# How to display URL addresses: 'footnote', 'no', or 'inline'.
|
||||
# texinfo_show_urls = 'footnote'
|
||||
#texinfo_show_urls = 'footnote'
|
||||
|
||||
# If true, do not generate a @detailmenu in the "Top" node's menu.
|
||||
# texinfo_no_detailmenu = False
|
||||
#texinfo_no_detailmenu = False
|
||||
|
||||
|
||||
# -- Options for Epub output ----------------------------------------------
|
||||
|
||||
# Bibliographic Dublin Core info.
|
||||
epub_title = "Paperless"
|
||||
epub_author = "Daniel Quinn"
|
||||
epub_publisher = "Daniel Quinn"
|
||||
epub_copyright = "2015, Daniel Quinn"
|
||||
epub_title = u'Paperless'
|
||||
epub_author = u'Daniel Quinn'
|
||||
epub_publisher = u'Daniel Quinn'
|
||||
epub_copyright = u'2015, Daniel Quinn'
|
||||
|
||||
# The basename for the epub file. It defaults to the project name.
|
||||
# epub_basename = u'Paperless'
|
||||
#epub_basename = u'Paperless'
|
||||
|
||||
# The HTML theme for the epub output. Since the default themes are not optimized
|
||||
# for small screen space, using the same theme for HTML and epub output is
|
||||
# usually not wise. This defaults to 'epub', a theme designed to save visual
|
||||
# space.
|
||||
# epub_theme = 'epub'
|
||||
#epub_theme = 'epub'
|
||||
|
||||
# The language of the text. It defaults to the language option
|
||||
# or en if the language is not set.
|
||||
# epub_language = ''
|
||||
#epub_language = ''
|
||||
|
||||
# The scheme of the identifier. Typical schemes are ISBN or URL.
|
||||
# epub_scheme = ''
|
||||
#epub_scheme = ''
|
||||
|
||||
# The unique identifier of the text. This can be a ISBN number
|
||||
# or the project homepage.
|
||||
# epub_identifier = ''
|
||||
#epub_identifier = ''
|
||||
|
||||
# A unique identification for the text.
|
||||
# epub_uid = ''
|
||||
#epub_uid = ''
|
||||
|
||||
# A tuple containing the cover image and cover page html template filenames.
|
||||
# epub_cover = ()
|
||||
#epub_cover = ()
|
||||
|
||||
# A sequence of (type, uri, title) tuples for the guide element of content.opf.
|
||||
# epub_guide = ()
|
||||
#epub_guide = ()
|
||||
|
||||
# HTML files that should be inserted before the pages created by sphinx.
|
||||
# The format is a list of tuples containing the path and title.
|
||||
# epub_pre_files = []
|
||||
#epub_pre_files = []
|
||||
|
||||
# HTML files that should be inserted after the pages created by sphinx.
|
||||
# The format is a list of tuples containing the path and title.
|
||||
# epub_post_files = []
|
||||
#epub_post_files = []
|
||||
|
||||
# A list of files that should not be packed into the epub file.
|
||||
epub_exclude_files = ["search.html"]
|
||||
epub_exclude_files = ['search.html']
|
||||
|
||||
# The depth of the table of contents in toc.ncx.
|
||||
# epub_tocdepth = 3
|
||||
#epub_tocdepth = 3
|
||||
|
||||
# Allow duplicate toc entries.
|
||||
# epub_tocdup = True
|
||||
#epub_tocdup = True
|
||||
|
||||
# Choose between 'default' and 'includehidden'.
|
||||
# epub_tocscope = 'default'
|
||||
#epub_tocscope = 'default'
|
||||
|
||||
# Fix unsupported image types using the PIL.
|
||||
# epub_fix_images = False
|
||||
#epub_fix_images = False
|
||||
|
||||
# Scale large images.
|
||||
# epub_max_image_width = 0
|
||||
#epub_max_image_width = 0
|
||||
|
||||
# How to display URL addresses: 'footnote', 'no', or 'inline'.
|
||||
# epub_show_urls = 'inline'
|
||||
#epub_show_urls = 'inline'
|
||||
|
||||
# If false, no index is generated.
|
||||
# epub_use_index = True
|
||||
#epub_use_index = True
|
||||
|
||||
|
||||
# Example configuration for intersphinx: refer to the Python standard library.
|
||||
intersphinx_mapping = {"http://docs.python.org/": None}
|
||||
intersphinx_mapping = {'http://docs.python.org/': None}
|
||||
|
@@ -1,879 +0,0 @@
|
||||
.. _configuration:
|
||||
|
||||
*************
|
||||
Configuration
|
||||
*************
|
||||
|
||||
Paperless provides a wide range of customizations.
|
||||
Depending on how you run paperless, these settings have to be defined in different
|
||||
places.
|
||||
|
||||
* If you run paperless on docker, ``paperless.conf`` is not used. Rather, configure
|
||||
paperless by copying necessary options to ``docker-compose.env``.
|
||||
* If you are running paperless on anything else, paperless will search for the
|
||||
configuration file in these locations and use the first one it finds:
|
||||
|
||||
.. code::
|
||||
|
||||
/path/to/paperless/paperless.conf
|
||||
/etc/paperless.conf
|
||||
/usr/local/etc/paperless.conf
|
||||
|
||||
|
||||
Required services
|
||||
#################
|
||||
|
||||
PAPERLESS_REDIS=<url>
|
||||
This is required for processing scheduled tasks such as email fetching, index
|
||||
optimization and for training the automatic document matcher.
|
||||
|
||||
Defaults to redis://localhost:6379.
|
||||
|
||||
PAPERLESS_DBHOST=<hostname>
|
||||
By default, sqlite is used as the database backend. This can be changed here.
|
||||
Set PAPERLESS_DBHOST and PostgreSQL will be used instead of SQLite.
|
||||
|
||||
PAPERLESS_DBPORT=<port>
|
||||
Adjust port if necessary.
|
||||
|
||||
Default is 5432.
|
||||
|
||||
PAPERLESS_DBNAME=<name>
|
||||
Database name in PostgreSQL.
|
||||
|
||||
Defaults to "paperless".
|
||||
|
||||
PAPERLESS_DBUSER=<name>
|
||||
Database user in PostgreSQL.
|
||||
|
||||
Defaults to "paperless".
|
||||
|
||||
PAPERLESS_DBPASS=<password>
|
||||
Database password for PostgreSQL.
|
||||
|
||||
Defaults to "paperless".
|
||||
|
||||
PAPERLESS_DBSSLMODE=<mode>
|
||||
SSL mode to use when connecting to PostgreSQL.
|
||||
|
||||
See `the official documentation about sslmode <https://www.postgresql.org/docs/current/libpq-ssl.html>`_.
|
||||
|
||||
Default is ``prefer``.
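
For illustration, here is a minimal sketch of the database options above as they might
appear in ``docker-compose.env`` or ``paperless.conf``. The host name and password are
placeholders, not defaults; substitute your own values.

.. code:: bash

    # Hypothetical values -- replace with your own Redis and PostgreSQL details.
    PAPERLESS_REDIS=redis://localhost:6379
    PAPERLESS_DBHOST=db.example.com   # setting this switches from SQLite to PostgreSQL
    PAPERLESS_DBPORT=5432
    PAPERLESS_DBNAME=paperless
    PAPERLESS_DBUSER=paperless
    PAPERLESS_DBPASS=change-me
    PAPERLESS_DBSSLMODE=prefer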
|
||||
|
||||
Paths and folders
|
||||
#################
|
||||
|
||||
PAPERLESS_CONSUMPTION_DIR=<path>
|
||||
This is where your documents should go to be consumed. Make sure that it exists
|
||||
and that the user running the paperless service can read/write its contents
|
||||
before you start Paperless.
|
||||
|
||||
Don't change this when using docker, as it only changes the path within the
|
||||
container. Change the local consumption directory in the docker-compose.yml
|
||||
file instead.
|
||||
|
||||
Defaults to "../consume/", relative to the "src" directory.
|
||||
|
||||
PAPERLESS_DATA_DIR=<path>
|
||||
This is where paperless stores all its data (search index, SQLite database,
|
||||
classification model, etc).
|
||||
|
||||
Defaults to "../data/", relative to the "src" directory.
|
||||
|
||||
PAPERLESS_TRASH_DIR=<path>
|
||||
Instead of removing deleted documents, they are moved to this directory.
|
||||
|
||||
This must be writeable by the user running paperless. When running inside
|
||||
docker, ensure that this path is within a permanent volume (such as
|
||||
"../media/trash") so it won't get lost on upgrades.
|
||||
|
||||
Defaults to empty (i.e. really delete documents).
|
||||
|
||||
PAPERLESS_MEDIA_ROOT=<path>
|
||||
This is where your documents and thumbnails are stored.
|
||||
|
||||
You can set this and PAPERLESS_DATA_DIR to the same folder to have paperless
|
||||
store all its data within the same volume.
|
||||
|
||||
Defaults to "../media/", relative to the "src" directory.
|
||||
|
||||
PAPERLESS_STATICDIR=<path>
|
||||
Override the default STATIC_ROOT here. This is where all static files
|
||||
created using "collectstatic" manager command are stored.
|
||||
|
||||
Unless you're doing something fancy, there is no need to override this.
|
||||
|
||||
Defaults to "../static/", relative to the "src" directory.
|
||||
|
||||
PAPERLESS_FILENAME_FORMAT=<format>
|
||||
Changes the filenames paperless uses to store documents in the media directory.
|
||||
See :ref:`advanced-file_name_handling` for details.
|
||||
|
||||
Default is none, which disables this feature.
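
As a hedged illustration only: the exact placeholders are documented in
:ref:`advanced-file_name_handling`, and the ones used below are assumptions chosen
for the example, not a recommendation.

.. code:: bash

    # Hypothetical layout: one folder per correspondent and year, file named by title.
    # Check the file name handling documentation for the placeholders actually available.
    PAPERLESS_FILENAME_FORMAT={correspondent}/{created_year}/{title}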
|
||||
|
||||
PAPERLESS_FILENAME_FORMAT_REMOVE_NONE=<bool>
|
||||
Tells paperless to omit placeholders in `PAPERLESS_FILENAME_FORMAT` that would resolve
to 'none' from the resulting filename. This also holds true for directory
names.
|
||||
See :ref:`advanced-file_name_handling` for details.
|
||||
|
||||
Defaults to `false` which disables this feature.
|
||||
|
||||
PAPERLESS_LOGGING_DIR=<path>
|
||||
This is where paperless will store log files.
|
||||
|
||||
Defaults to "``PAPERLESS_DATA_DIR``/log/".
|
||||
|
||||
|
||||
Logging
|
||||
#######
|
||||
|
||||
PAPERLESS_LOGROTATE_MAX_SIZE=<num>
|
||||
Maximum file size for log files before they are rotated, in bytes.
|
||||
|
||||
Defaults to 1 MiB.
|
||||
|
||||
PAPERLESS_LOGROTATE_MAX_BACKUPS=<num>
|
||||
Number of rotated log files to keep.
|
||||
|
||||
Defaults to 20.
|
||||
|
||||
.. _hosting-and-security:
|
||||
|
||||
Hosting & Security
|
||||
##################
|
||||
|
||||
PAPERLESS_SECRET_KEY=<key>
|
||||
Paperless uses this to make session tokens. If you expose paperless on the
|
||||
internet, you need to change this, since the default secret is well known.
|
||||
|
||||
Use any sequence of characters. The more, the better. You don't need to
|
||||
remember this. Just face-roll your keyboard.
|
||||
|
||||
Default is listed in the file ``src/paperless/settings.py``.
|
||||
|
||||
PAPERLESS_URL=<url>
|
||||
This setting can be used to set the three options below (ALLOWED_HOSTS,
|
||||
CORS_ALLOWED_HOSTS and CSRF_TRUSTED_ORIGINS). If the other options are
|
||||
set the values will be combined with this one. Do not include a trailing
|
||||
slash. E.g. https://paperless.domain.com
|
||||
|
||||
Defaults to empty string, leaving the other settings unaffected.
|
||||
|
||||
PAPERLESS_CSRF_TRUSTED_ORIGINS=<comma-separated-list>
|
||||
A list of trusted origins for unsafe requests (e.g. POST). As of Django 4.0
|
||||
this is required to access the Django admin via the web.
|
||||
See https://docs.djangoproject.com/en/4.0/ref/settings/#csrf-trusted-origins
|
||||
|
||||
Can also be set using PAPERLESS_URL (see above).
|
||||
|
||||
Defaults to empty string, which does not add any origins to the trusted list.
|
||||
|
||||
PAPERLESS_ALLOWED_HOSTS=<comma-separated-list>
|
||||
If you're planning on putting Paperless on the open internet, then you
|
||||
really should set this value to the domain name you're using. Failing to do
|
||||
so leaves you open to HTTP host header attacks:
|
||||
https://docs.djangoproject.com/en/3.1/topics/security/#host-header-validation
|
||||
|
||||
Just remember that this is a comma-separated list, so "example.com" is fine,
|
||||
as is "example.com,www.example.com", but NOT " example.com" or "example.com,"
|
||||
|
||||
Can also be set using PAPERLESS_URL (see above).
|
||||
|
||||
If manually set, please remember to include "localhost". Otherwise docker
|
||||
healthcheck will fail.
|
||||
|
||||
Defaults to "*", which is all hosts.
|
||||
|
||||
PAPERLESS_CORS_ALLOWED_HOSTS=<comma-separated-list>
|
||||
You need to add your servers to the list of allowed hosts that can do CORS
|
||||
calls. Set this to your public domain name.
|
||||
|
||||
Can also be set using PAPERLESS_URL (see above).
|
||||
|
||||
Defaults to "http://localhost:8000".
|
||||
|
||||
PAPERLESS_FORCE_SCRIPT_NAME=<path>
|
||||
To host paperless under a subpath url like example.com/paperless you set
|
||||
this value to /paperless. No trailing slash!
|
||||
|
||||
Defaults to none, which hosts paperless at "/".
|
||||
|
||||
PAPERLESS_STATIC_URL=<path>
|
||||
Override the STATIC_URL here. Unless you're hosting Paperless off a
|
||||
subpath like /paperless/, you probably don't need to change this.
|
||||
|
||||
Defaults to "/static/".
|
||||
|
||||
PAPERLESS_AUTO_LOGIN_USERNAME=<username>
|
||||
Specify a username here so that paperless will automatically perform login
|
||||
with the selected user.
|
||||
|
||||
.. danger::
|
||||
|
||||
Do not use this when exposing paperless on the internet. There are no
|
||||
checks in place that would prevent you from doing this.
|
||||
|
||||
Defaults to none, which disables this feature.
|
||||
|
||||
PAPERLESS_ADMIN_USER=<username>
|
||||
If this environment variable is specified, Paperless automatically creates
|
||||
a superuser with the provided username at start. This is useful in cases
|
||||
where you cannot run the `createsuperuser` command separately, such as Kubernetes
|
||||
or AWS ECS.
|
||||
|
||||
Requires `PAPERLESS_ADMIN_PASSWORD` to be set.
|
||||
|
||||
.. note::
|
||||
|
||||
This will not change an existing [super]user's password, nor will
|
||||
it recreate a user that already exists. You can leave this throughout
|
||||
the lifecycle of the containers.
|
||||
|
||||
PAPERLESS_ADMIN_MAIL=<email>
|
||||
(Optional) Specify superuser email address. Only used when
|
||||
`PAPERLESS_ADMIN_USER` is set.
|
||||
|
||||
Defaults to ``root@localhost``.
|
||||
|
||||
PAPERLESS_ADMIN_PASSWORD=<password>
|
||||
Only used when `PAPERLESS_ADMIN_USER` is set.
|
||||
This will be the password of the automatically created superuser.
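
A minimal sketch for ``docker-compose.env``, assuming you want the container to create
a superuser on first start. The credentials shown are placeholders.

.. code:: bash

    # Creates the superuser "admin" on startup if it does not exist yet.
    # This will not change the password of an existing user.
    PAPERLESS_ADMIN_USER=admin
    PAPERLESS_ADMIN_MAIL=admin@example.com
    PAPERLESS_ADMIN_PASSWORD=change-me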
|
||||
|
||||
|
||||
PAPERLESS_COOKIE_PREFIX=<str>
|
||||
Specify a prefix that is added to the cookies used by paperless to identify
|
||||
the currently logged in user. This is useful for when you're running two
|
||||
instances of paperless on the same host.
|
||||
|
||||
After changing this, you will have to login again.
|
||||
|
||||
Defaults to ``""``, which does not alter the cookie names.
|
||||
|
||||
PAPERLESS_ENABLE_HTTP_REMOTE_USER=<bool>
|
||||
Allows authentication via HTTP_REMOTE_USER which is used by some SSO
|
||||
applications.
|
||||
|
||||
.. warning::
|
||||
|
||||
This will allow authentication by simply adding a ``Remote-User: <username>`` header
|
||||
to a request. Use with care! You especially *must* ensure that any such header is not
|
||||
passed from your proxy server to paperless.
|
||||
|
||||
If you're exposing paperless to the internet directly, do not use this.
|
||||
|
||||
Also see the warning `in the official documentation <https://docs.djangoproject.com/en/3.1/howto/auth-remote-user/#configuration>`_.
|
||||
|
||||
Defaults to `false` which disables this feature.
|
||||
|
||||
PAPERLESS_HTTP_REMOTE_USER_HEADER_NAME=<str>
|
||||
If `PAPERLESS_ENABLE_HTTP_REMOTE_USER` is enabled, this property allows you to
|
||||
customize the name of the HTTP header from which the authenticated username
|
||||
is extracted. Values are in terms of
|
||||
`HttpRequest.META <https://docs.djangoproject.com/en/3.1/ref/request-response/#django.http.HttpRequest.META>`_.
|
||||
Thus, the configured value must start with `HTTP_` followed by the
|
||||
normalized actual header name.
|
||||
|
||||
Defaults to `HTTP_REMOTE_USER`.
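
For example, if a hypothetical SSO proxy sends the authenticated user in a header named
``X-Authenticated-User``, Django exposes it in ``HttpRequest.META`` as
``HTTP_X_AUTHENTICATED_USER`` (uppercased, dashes replaced with underscores), so a
sketch of the corresponding configuration would be:

.. code:: bash

    # Hypothetical SSO setup: trust the X-Authenticated-User header set by the proxy.
    # Make sure the proxy never forwards a client-supplied value for this header.
    PAPERLESS_ENABLE_HTTP_REMOTE_USER=true
    PAPERLESS_HTTP_REMOTE_USER_HEADER_NAME=HTTP_X_AUTHENTICATED_USER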
|
||||
|
||||
PAPERLESS_LOGOUT_REDIRECT_URL=<str>
|
||||
URL to redirect the user to after a logout. This can be used together with
|
||||
`PAPERLESS_ENABLE_HTTP_REMOTE_USER` to redirect the user back to the SSO
|
||||
application's logout page.
|
||||
|
||||
Defaults to None, which disables this feature.
|
||||
|
||||
.. _configuration-ocr:
|
||||
|
||||
OCR settings
|
||||
############
|
||||
|
||||
Paperless uses `OCRmyPDF <https://ocrmypdf.readthedocs.io/en/latest/>`_ for
|
||||
performing OCR on documents and images. Paperless uses sensible defaults for
|
||||
most settings, but all of them can be configured to your needs.
|
||||
|
||||
PAPERLESS_OCR_LANGUAGE=<lang>
|
||||
Customize the language that paperless will attempt to use when
|
||||
parsing documents.
|
||||
|
||||
It should be a 3-letter language code consistent with ISO
|
||||
639: https://www.loc.gov/standards/iso639-2/php/code_list.php
|
||||
|
||||
Set this to the language most of your documents are written in.
|
||||
|
||||
This can be a combination of multiple languages such as ``deu+eng``,
|
||||
in which case tesseract will use whatever language matches best.
|
||||
Keep in mind that tesseract uses much more cpu time with multiple
|
||||
languages enabled.
|
||||
|
||||
Defaults to "eng".
|
||||
|
||||
Note: If your language contains a '-' such as chi-sim, you must use chi_sim
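
For example, a mixed German/English document collection might use the following; the
values are illustrative only.

.. code:: bash

    # Multiple languages: tesseract picks whichever matches best, at higher CPU cost.
    PAPERLESS_OCR_LANGUAGE=deu+eng

    # Languages containing a dash use an underscore instead, e.g. chi-sim becomes:
    # PAPERLESS_OCR_LANGUAGE=chi_sim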
|
||||
|
||||
PAPERLESS_OCR_MODE=<mode>
|
||||
Tell paperless when and how to perform ocr on your documents. Four modes
|
||||
are available:
|
||||
|
||||
* ``skip``: Paperless skips all pages and will perform ocr only on pages
|
||||
where no text is present. This is the safest option.
|
||||
* ``skip_noarchive``: In addition to skip, paperless won't create an
|
||||
archived version of your documents when it finds any text in them.
|
||||
This is useful if you don't want to have two almost-identical versions
|
||||
of your digital documents in the media folder. This is the fastest option.
|
||||
* ``redo``: Paperless will OCR all pages of your documents and attempt to
|
||||
replace any existing text layers with new text. This will be useful for
|
||||
documents from scanners that already performed OCR with insufficient
|
||||
results. It will also perform OCR on purely digital documents.
|
||||
|
||||
This option may fail on some documents that have features that cannot
|
||||
be removed, such as forms. In this case, the text from the document is
|
||||
used instead.
|
||||
* ``force``: Paperless rasterizes your documents, converting any text
|
||||
into images and puts the OCRed text on top. This works for all documents,
|
||||
however, the resulting document may be significantly larger and text
|
||||
won't appear as sharp when zoomed in.
|
||||
|
||||
The default is ``skip``, which only performs OCR when necessary and always
|
||||
creates archived documents.
|
||||
|
||||
Read more about this in the `OCRmyPDF documentation <https://ocrmypdf.readthedocs.io/en/latest/advanced.html#when-ocr-is-skipped>`_.
|
||||
|
||||
PAPERLESS_OCR_CLEAN=<mode>
|
||||
Tells paperless to use ``unpaper`` to clean any input document before
|
||||
sending it to tesseract. This uses more resources, but generally results
|
||||
in better OCR results. The following modes are available:
|
||||
|
||||
* ``clean``: Apply unpaper.
|
||||
* ``clean-final``: Apply unpaper, and use the cleaned images to build the
|
||||
output file instead of the original images.
|
||||
* ``none``: Do not apply unpaper.
|
||||
|
||||
Defaults to ``clean``.
|
||||
|
||||
.. note::
|
||||
|
||||
``clean-final`` is incompatible with ocr mode ``redo``. When both
|
||||
``clean-final`` and the ocr mode ``redo`` are configured, ``clean``
|
||||
is used instead.
|
||||
|
||||
PAPERLESS_OCR_DESKEW=<bool>
|
||||
Tells paperless to correct skewing (slight rotation of input images mainly
|
||||
due to improper scanning).
|
||||
|
||||
Defaults to ``true``, which enables this feature.
|
||||
|
||||
.. note::
|
||||
|
||||
Deskewing is incompatible with ocr mode ``redo``. Deskewing will get
|
||||
disabled automatically if ``redo`` is used as the ocr mode.
|
||||
|
||||
PAPERLESS_OCR_ROTATE_PAGES=<bool>
|
||||
Tells paperless to correct page rotation (90°, 180° and 270° rotation).
|
||||
|
||||
If you notice that paperless is not rotating incorrectly rotated
|
||||
pages (or vice versa), try adjusting the threshold up or down (see below).
|
||||
|
||||
Defaults to ``true``, which enables this feature.
|
||||
|
||||
|
||||
PAPERLESS_OCR_ROTATE_PAGES_THRESHOLD=<num>
|
||||
Adjust the threshold for automatic page rotation by ``PAPERLESS_OCR_ROTATE_PAGES``.
|
||||
This is an arbitrary value reported by tesseract. "15" is a very conservative value,
|
||||
whereas "2" is a very aggressive option and will often result in correctly rotated pages
|
||||
being rotated as well.
|
||||
|
||||
Defaults to "12".
|
||||
|
||||
PAPERLESS_OCR_OUTPUT_TYPE=<type>
|
||||
Specify the type of PDF documents that paperless should produce.
|
||||
|
||||
* ``pdf``: Modify the PDF document as little as possible.
|
||||
* ``pdfa``: Convert PDF documents into PDF/A-2b documents, which is a
|
||||
subset of the entire PDF specification and meant for storing
|
||||
documents long term.
|
||||
* ``pdfa-1``, ``pdfa-2``, ``pdfa-3`` to specify the exact version of
|
||||
PDF/A you wish to use.
|
||||
|
||||
If not specified, ``pdfa`` is used. Remember that paperless also keeps
|
||||
the original input file as well as the archived version.
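
Pulling several of the OCR options above together, here is a hedged example for
documents coming from a scanner without its own OCR. These are illustrative choices
within the documented values, not recommendations.

.. code:: bash

    # Only OCR pages that have no text layer yet, and always build an archive file.
    PAPERLESS_OCR_MODE=skip
    # Clean pages with unpaper before OCR, but keep the original images in the output.
    PAPERLESS_OCR_CLEAN=clean
    # Straighten crooked scans and auto-rotate pages.
    PAPERLESS_OCR_DESKEW=true
    PAPERLESS_OCR_ROTATE_PAGES=true
    # Produce PDF/A-2b archive documents (the default).
    PAPERLESS_OCR_OUTPUT_TYPE=pdfa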
|
||||
|
||||
|
||||
PAPERLESS_OCR_PAGES=<num>
|
||||
Tells paperless to use only the specified number of pages for OCR. Documents
with fewer than the specified number of pages get OCR'ed completely.
|
||||
|
||||
Specifying 1 here will only use the first page.
|
||||
|
||||
When combined with ``PAPERLESS_OCR_MODE=redo`` or ``PAPERLESS_OCR_MODE=force``,
|
||||
paperless will not modify any text it finds on excluded pages and will copy it
verbatim.
|
||||
|
||||
Defaults to 0, which disables this feature and always uses all pages.
|
||||
|
||||
PAPERLESS_OCR_IMAGE_DPI=<num>
|
||||
Paperless will OCR any images you put into the system and convert them
|
||||
into PDF documents. This is useful if your scanner produces images.
|
||||
In order to do so, paperless needs to know the DPI of the image.
|
||||
Most images from scanners will have this information embedded and
|
||||
paperless will detect and use that information. In case this fails, it
|
||||
uses this value as a fallback.
|
||||
|
||||
Set this to the DPI your scanner produces images at.
|
||||
|
||||
Default is none, which will automatically calculate image DPI so that
|
||||
the produced PDF documents are A4 sized.
|
||||
|
||||
PAPERLESS_OCR_MAX_IMAGE_PIXELS=<num>
|
||||
Paperless will raise a warning when OCRing images which are over this limit and
|
||||
will not OCR images which are more than twice this limit. Note this does not
|
||||
prevent the document from being consumed, but could result in missing text content.
|
||||
|
||||
If unset, will default to the value determined by
|
||||
`Pillow <https://pillow.readthedocs.io/en/stable/reference/Image.html#PIL.Image.MAX_IMAGE_PIXELS>`_.
|
||||
|
||||
.. note::
|
||||
|
||||
Increasing this limit could cause Paperless to consume additional resources
|
||||
when consuming a file. Be sure you have sufficient system resources.
|
||||
|
||||
.. caution::
|
||||
|
||||
The limit is intended to prevent malicious files from consuming system resources
|
||||
and causing crashes and other errors. Only increase this value if you are certain
|
||||
your documents are not malicious and you need the text which was not OCRed.
|
||||
|
||||
PAPERLESS_OCR_USER_ARGS=<json>
|
||||
OCRmyPDF offers many more options. Use this parameter to specify any
|
||||
additional arguments you wish to pass to OCRmyPDF. Since Paperless uses
|
||||
the API of OCRmyPDF, you have to specify these in a format that can be
|
||||
passed to the API. See `the API reference of OCRmyPDF <https://ocrmypdf.readthedocs.io/en/latest/api.html#reference>`_
|
||||
for valid parameters. All command line options are supported, but they
|
||||
use underscores instead of dashes.
|
||||
|
||||
.. caution::
|
||||
|
||||
Paperless has been tested to work with the OCR options provided
|
||||
above. There are many options that are incompatible with each other,
|
||||
so specifying invalid options may prevent paperless from consuming
|
||||
any documents.
|
||||
|
||||
Specify arguments as a JSON dictionary. Keep note of lower case booleans
|
||||
and double quoted parameter names and strings. Examples:
|
||||
|
||||
.. code:: json
|
||||
|
||||
{"deskew": true, "optimize": 3, "unpaper_args": "--pre-rotate 90"}
|
||||
|
||||
.. _configuration-tika:
|
||||
|
||||
Tika settings
|
||||
#############
|
||||
|
||||
Paperless can make use of `Tika <https://tika.apache.org/>`_ and
|
||||
`Gotenberg <https://gotenberg.dev/>`_ for parsing and
|
||||
converting "Office" documents (such as ".doc", ".xlsx" and ".odt"). If you
|
||||
wish to use this, you must provide a Tika server and a Gotenberg server,
|
||||
configure their endpoints, and enable the feature.
|
||||
|
||||
PAPERLESS_TIKA_ENABLED=<bool>
|
||||
Enable (or disable) the Tika parser.
|
||||
|
||||
Defaults to false.
|
||||
|
||||
PAPERLESS_TIKA_ENDPOINT=<url>
|
||||
Set the endpoint URL where Paperless can reach your Tika server.
|
||||
|
||||
Defaults to "http://localhost:9998".
|
||||
|
||||
PAPERLESS_TIKA_GOTENBERG_ENDPOINT=<url>
|
||||
Set the endpoint URL where Paperless can reach your Gotenberg server.
|
||||
|
||||
Defaults to "http://localhost:3000".
|
||||
|
||||
If you run paperless on docker, you can add those services to the docker-compose
|
||||
file (see the provided ``docker-compose.sqlite-tika.yml`` file for reference). The changes
|
||||
required are as follows:
|
||||
|
||||
.. code:: yaml
|
||||
|
||||
services:
|
||||
# ...
|
||||
|
||||
webserver:
|
||||
# ...
|
||||
|
||||
environment:
|
||||
# ...
|
||||
|
||||
PAPERLESS_TIKA_ENABLED: 1
|
||||
PAPERLESS_TIKA_GOTENBERG_ENDPOINT: http://gotenberg:3000
|
||||
PAPERLESS_TIKA_ENDPOINT: http://tika:9998
|
||||
|
||||
# ...
|
||||
|
||||
gotenberg:
|
||||
image: gotenberg/gotenberg:7.4
|
||||
restart: unless-stopped
|
||||
command:
|
||||
- "gotenberg"
|
||||
- "--chromium-disable-routes=true"
|
||||
|
||||
tika:
|
||||
image: ghcr.io/paperless-ngx/tika:latest
|
||||
restart: unless-stopped
|
||||
|
||||
Add the configuration variables to the environment of the webserver (alternatively
|
||||
put the configuration in the ``docker-compose.env`` file) and add the additional
|
||||
services below the webserver service. Watch out for indentation.
|
||||
|
||||
Make sure to use the correct format `PAPERLESS_TIKA_ENABLED = 1` so python_dotenv can parse the statement correctly.
|
||||
|
||||
Software tweaks
|
||||
###############
|
||||
|
||||
PAPERLESS_TASK_WORKERS=<num>
|
||||
Paperless does multiple things in the background: Maintain the search index,
|
||||
maintain the automatic matching algorithm, check emails, consume documents,
|
||||
etc. This variable specifies how many things it will do in parallel.
|
||||
|
||||
Defaults to 1
|
||||
|
||||
|
||||
PAPERLESS_THREADS_PER_WORKER=<num>
|
||||
Furthermore, paperless uses multiple threads when consuming documents to
|
||||
speed up OCR. This variable specifies how many pages paperless will process
|
||||
in parallel on a single document.
|
||||
|
||||
.. caution::
|
||||
|
||||
Ensure that the product
|
||||
|
||||
PAPERLESS_TASK_WORKERS * PAPERLESS_THREADS_PER_WORKER
|
||||
|
||||
does not exceed your CPU core count or else paperless will be extremely slow.
|
||||
If you want paperless to process many documents in parallel, choose a high
|
||||
worker count. If you want paperless to process very large documents faster,
|
||||
use a higher thread per worker count.
|
||||
|
||||
The default is a balance between the two, according to your CPU core count,
|
||||
with a slight favor towards threads per worker:
|
||||
|
||||
+----------------+---------+---------+
|
||||
| CPU core count | Workers | Threads |
|
||||
+----------------+---------+---------+
|
||||
| 1 | 1 | 1 |
|
||||
+----------------+---------+---------+
|
||||
| 2 | 2 | 1 |
|
||||
+----------------+---------+---------+
|
||||
| 4 | 2 | 2 |
|
||||
+----------------+---------+---------+
|
||||
| 6 | 2 | 3 |
|
||||
+----------------+---------+---------+
|
||||
| 8 | 2 | 4 |
|
||||
+----------------+---------+---------+
|
||||
| 12 | 3 | 4 |
|
||||
+----------------+---------+---------+
|
||||
| 16 | 4 | 4 |
|
||||
+----------------+---------+---------+
|
||||
|
||||
If you only specify PAPERLESS_TASK_WORKERS, paperless will adjust
|
||||
PAPERLESS_THREADS_PER_WORKER automatically.
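
For example, on a 4-core machine the table above suggests two workers with two threads
each; a sketch of an explicit configuration matching that row would be:

.. code:: bash

    # 2 workers x 2 threads = 4, matching a 4-core CPU.
    PAPERLESS_TASK_WORKERS=2
    PAPERLESS_THREADS_PER_WORKER=2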
|
||||
|
||||
|
||||
PAPERLESS_WORKER_TIMEOUT=<num>
|
||||
Machines with few or weak CPU cores might not be able to finish OCR on
large documents within the default 1800 seconds, so extending this timeout
may prove useful on weak hardware setups.
|
||||
|
||||
PAPERLESS_WORKER_RETRY=<num>
|
||||
If PAPERLESS_WORKER_TIMEOUT has been configured, the retry time for a task can
|
||||
also be configured. By default, this value will be set to 10s more than the
|
||||
worker timeout. This value should never be set less than the worker timeout.
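
As an illustrative sketch, doubling the default timeout for slow hardware and keeping
the retry slightly above it; the specific numbers are arbitrary examples.

.. code:: bash

    # Allow up to an hour per task; the retry must never be below the timeout.
    PAPERLESS_WORKER_TIMEOUT=3600
    PAPERLESS_WORKER_RETRY=3610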
|
||||
|
||||
PAPERLESS_TIME_ZONE=<timezone>
|
||||
Set the time zone here.
|
||||
See https://docs.djangoproject.com/en/3.1/ref/settings/#std:setting-TIME_ZONE
|
||||
for details on how to set it.
|
||||
|
||||
Defaults to UTC.
|
||||
|
||||
|
||||
.. _configuration-polling:
|
||||
|
||||
PAPERLESS_CONSUMER_POLLING=<num>
|
||||
If paperless does not find documents added to your consume folder, it might
|
||||
not be able to automatically detect filesystem changes. In that case,
|
||||
specify a polling interval in seconds here, which will then cause paperless
|
||||
to periodically check your consumption directory for changes. This will also
|
||||
disable listening for file system changes with ``inotify``.
|
||||
|
||||
Defaults to 0, which disables polling and uses filesystem notifications.
|
||||
|
||||
PAPERLESS_CONSUMER_POLLING_RETRY_COUNT=<num>
|
||||
If consumer polling is enabled, sets the number of times paperless will check for a
|
||||
file to remain unmodified.
|
||||
|
||||
Defaults to 5.
|
||||
|
||||
PAPERLESS_CONSUMER_POLLING_DELAY=<num>
|
||||
If consumer polling is enabled, sets the delay in seconds between each check (above) paperless
|
||||
will do while waiting for a file to remain unmodified.
|
||||
|
||||
Defaults to 5.
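
A sketch for a consumption directory on a network share where inotify does not work;
the 10-second interval is an arbitrary example. With the defaults shown below, paperless
waits for a file to stay unchanged across 5 checks spaced 5 seconds apart, i.e. roughly
25 seconds, before consuming it.

.. code:: bash

    # Check the consumption directory every 10 seconds instead of using inotify.
    PAPERLESS_CONSUMER_POLLING=10
    # A file must remain unmodified for 5 checks, 5 seconds apart (the defaults).
    PAPERLESS_CONSUMER_POLLING_RETRY_COUNT=5
    PAPERLESS_CONSUMER_POLLING_DELAY=5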
|
||||
|
||||
.. _configuration-inotify:
|
||||
|
||||
PAPERLESS_CONSUMER_INOTIFY_DELAY=<num>
|
||||
Sets the time in seconds the consumer will wait for additional events
|
||||
from inotify before the consumer will consider a file ready and begin consumption.
|
||||
Certain scanners or network setups may generate multiple events for a single file,
|
||||
leading to multiple consumers working on the same file. Configure this to
|
||||
prevent that.
|
||||
|
||||
Defaults to 0.5 seconds.
|
||||
|
||||
PAPERLESS_CONSUMER_DELETE_DUPLICATES=<bool>
|
||||
When the consumer detects a duplicate document, it will not touch the
|
||||
original document. This default behavior can be changed here.
|
||||
|
||||
Defaults to false.
|
||||
|
||||
|
||||
PAPERLESS_CONSUMER_RECURSIVE=<bool>
|
||||
Enable recursive watching of the consumption directory. Paperless will
|
||||
then pick up files in subdirectories within your consumption
|
||||
directory as well.
|
||||
|
||||
Defaults to false.
|
||||
|
||||
|
||||
PAPERLESS_CONSUMER_SUBDIRS_AS_TAGS=<bool>
|
||||
Set the names of subdirectories as tags for consumed files.
|
||||
E.g. <CONSUMPTION_DIR>/foo/bar/file.pdf will add the tags "foo" and "bar" to
|
||||
the consumed file. Paperless will create any tags that don't exist yet.
|
||||
|
||||
This is useful for sorting documents with certain tags such as ``car`` or
|
||||
``todo`` prior to consumption. These folders won't be deleted.
|
||||
|
||||
PAPERLESS_CONSUMER_RECURSIVE must be enabled for this to work.
|
||||
|
||||
Defaults to false.
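
For example, with the sketch below a file dropped at
``<CONSUMPTION_DIR>/car/insurance/policy.pdf`` would be consumed and tagged ``car`` and
``insurance``; the folder names are just an illustration.

.. code:: bash

    # Recursive watching is required for subdirectory tagging to work.
    PAPERLESS_CONSUMER_RECURSIVE=true
    PAPERLESS_CONSUMER_SUBDIRS_AS_TAGS=true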
|
||||
|
||||
PAPERLESS_CONSUMER_ENABLE_BARCODES=<bool>
|
||||
Enables the scanning and page separation based on detected barcodes.
|
||||
This allows for scanning and adding multiple documents per uploaded
|
||||
file, which are separated by one or multiple barcode pages.
|
||||
|
||||
For ease of use, it is suggested to use a standardized separation page,
|
||||
e.g. `here <https://www.alliancegroup.co.uk/patch-codes.htm>`_.
|
||||
|
||||
If no barcodes are detected in the uploaded file, no page separation
|
||||
will happen.
|
||||
|
||||
The original document will be removed and the separated pages will be
|
||||
saved as pdf.
|
||||
|
||||
Defaults to false.
|
||||
|
||||
PAPERLESS_CONSUMER_BARCODE_TIFF_SUPPORT=<bool>
|
||||
Whether TIFF image files should be scanned for barcodes.
|
||||
This will automatically convert any TIFF image(s) to pdfs for later
|
||||
processing.
|
||||
This only has an effect, if PAPERLESS_CONSUMER_ENABLE_BARCODES has been
|
||||
enabled.
|
||||
|
||||
Defaults to false.
|
||||
|
||||
PAPERLESS_CONSUMER_BARCODE_STRING=PATCHT
|
||||
Defines the string to be detected as a separator barcode.
|
||||
If paperless is used with the PATCH-T separator pages, users
|
||||
shouldn't change this.
|
||||
|
||||
Defaults to "PATCHT"
|
||||
|
||||
PAPERLESS_CONVERT_MEMORY_LIMIT=<num>
|
||||
On smaller systems, or even in the case of Very Large Documents, the consumer
|
||||
may explode, complaining about how it's "unable to extend pixel cache". In
|
||||
such cases, try setting this to a reasonably low value, like 32. The
|
||||
default is to use whatever is necessary to do everything without writing to
|
||||
disk, and units are in megabytes.
|
||||
|
||||
For more information on how to use this value, you should search
|
||||
the web for "MAGICK_MEMORY_LIMIT".
|
||||
|
||||
Defaults to 0, which disables the limit.
|
||||
|
||||
PAPERLESS_CONVERT_TMPDIR=<path>
|
||||
Similar to the memory limit, if you've got a small system and your OS mounts
|
||||
/tmp as tmpfs, you should set this to a path that's on a physical disk, like
|
||||
/home/your_user/tmp or something. ImageMagick will use this as scratch space
|
||||
when crunching through very large documents.
|
||||
|
||||
For more information on how to use this value, you should search
|
||||
the web for "MAGICK_TMPDIR".
|
||||
|
||||
Default is none, which disables the temporary directory.
|
||||
|
||||
PAPERLESS_POST_CONSUME_SCRIPT=<filename>
|
||||
After a document is consumed, Paperless can trigger an arbitrary script if
|
||||
you like. This script will be passed a number of arguments for you to work
|
||||
with. For more information, take a look at :ref:`advanced-post_consume_script`.
|
||||
|
||||
The default is blank, which means nothing will be executed.
|
||||
|
||||
PAPERLESS_FILENAME_DATE_ORDER=<format>
|
||||
Paperless will check the document text for document date information.
|
||||
Use this setting to enable checking the document filename for date
|
||||
information. The date order can be set to any option as specified in
|
||||
https://dateparser.readthedocs.io/en/latest/settings.html#date-order.
|
||||
The filename will be checked first, and if nothing is found, the document
|
||||
text will be checked as normal.
|
||||
|
||||
A date in a filename must have some separators (`.`, `-`, `/`, etc)
|
||||
for it to be parsed.
|
||||
|
||||
Defaults to none, which disables this feature.
|
||||
|
||||
PAPERLESS_THUMBNAIL_FONT_NAME=<filename>
|
||||
Paperless creates thumbnails for plain text files by rendering the content
|
||||
of the file on an image and uses a predefined font for that. This
|
||||
font can be changed here.
|
||||
|
||||
Note that this won't have any effect on already generated thumbnails.
|
||||
|
||||
Defaults to ``/usr/share/fonts/liberation/LiberationSerif-Regular.ttf``.
|
||||
|
||||
PAPERLESS_IGNORE_DATES=<string>
|
||||
Paperless parses a document's creation date from the filename and file content.
|
||||
You may specify a comma separated list of dates that should be ignored during
|
||||
this process. This is useful for special dates (like date of birth) that appear
|
||||
in documents regularly but are very unlikely to be the document's creation date.
|
||||
|
||||
The date is parsed using the order specified in PAPERLESS_DATE_ORDER
|
||||
|
||||
Defaults to an empty string to not ignore any dates.
|
||||
|
||||
PAPERLESS_DATE_ORDER=<format>
|
||||
Paperless will try to determine the document creation date from its contents.
|
||||
Specify the date format Paperless should expect to see within your documents.
|
||||
|
||||
This option defaults to DMY which translates to day first, month second, and year
|
||||
last order. Characters D, M, or Y can be shuffled to meet the required order.
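
A combined sketch of the date-related options, assuming documents dated year-first in
their filenames and a date of birth that should never be mistaken for a creation date.
Both values are placeholders.

.. code:: bash

    # Look for dates in the filename first, in year-month-day order.
    PAPERLESS_FILENAME_DATE_ORDER=YMD
    # Dates inside the document text are day-month-year (the default).
    PAPERLESS_DATE_ORDER=DMY
    # Never treat this date (e.g. a date of birth) as the creation date.
    PAPERLESS_IGNORE_DATES=13.04.1994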
|
||||
|
||||
PAPERLESS_CONSUMER_IGNORE_PATTERNS=<json>
|
||||
By default, paperless ignores certain files and folders in the consumption
|
||||
directory, such as system files created by macOS.
|
||||
|
||||
This can be adjusted by configuring a custom json array with patterns to exclude.
|
||||
|
||||
Defaults to ``[".DS_STORE/*", "._*", ".stfolder/*", ".stversions/*", ".localized/*", "desktop.ini"]``.
|
||||
|
||||
Binaries
|
||||
########
|
||||
|
||||
There are a few external software packages that Paperless expects to find on
|
||||
your system when it starts up. Unless you've done something creative with
|
||||
their installation, you probably won't need to edit any of these. However,
|
||||
if you've installed these programs somewhere where simply typing the name of
|
||||
the program doesn't automatically execute it (ie. the program isn't in your
|
||||
$PATH), then you'll need to specify the literal path for that program.
|
||||
|
||||
PAPERLESS_CONVERT_BINARY=<path>
|
||||
Defaults to "/usr/bin/convert".
|
||||
|
||||
PAPERLESS_GS_BINARY=<path>
|
||||
Defaults to "/usr/bin/gs".
|
||||
|
||||
|
||||
.. _configuration-docker:
|
||||
|
||||
Docker-specific options
|
||||
#######################
|
||||
|
||||
These options don't have any effect in ``paperless.conf``. These options adjust
|
||||
the behavior of the docker container. Configure these in `docker-compose.env`.
|
||||
|
||||
PAPERLESS_WEBSERVER_WORKERS=<num>
|
||||
The number of worker processes the webserver should spawn. More worker processes
|
||||
usually result in the front end loading data much quicker. However, each worker process
|
||||
also loads the entire application into memory separately, so increasing this value
|
||||
will increase RAM usage.
|
||||
|
||||
Defaults to 1.
|
||||
|
||||
PAPERLESS_PORT=<port>
|
||||
The port number the webserver will listen on inside the container. There are
|
||||
special setups where you may need this to avoid collisions with other
|
||||
services (like using podman with multiple containers in one pod).
|
||||
|
||||
Don't change this when using Docker. To change the port the webserver is
|
||||
reachable outside of the container, instead refer to the "ports" key in
|
||||
``docker-compose.yml``.
|
||||
|
||||
Defaults to 8000.
|
||||
|
||||
USERMAP_UID=<uid>
|
||||
The ID of the paperless user in the container. Set this to your actual user ID on the
|
||||
host system, which you can get by executing
|
||||
|
||||
.. code:: shell-session
|
||||
|
||||
$ id -u
|
||||
|
||||
Paperless will change ownership on its folders to this user, so you need to get this right
|
||||
in order to be able to write to the consumption directory.
|
||||
|
||||
Defaults to 1000.
|
||||
|
||||
USERMAP_GID=<gid>
|
||||
The ID of the paperless Group in the container. Set this to your actual group ID on the
|
||||
host system, which you can get by executing
|
||||
|
||||
.. code:: shell-session
|
||||
|
||||
$ id -g
|
||||
|
||||
Paperless will change ownership on its folders to this group, so you need to get this right
|
||||
in order to be able to write to the consumption directory.
|
||||
|
||||
Defaults to 1000.
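
For instance, if ``id -u`` and ``id -g`` both print ``1000`` on your host (a common
default for the first user account), the corresponding entries in ``docker-compose.env``
would be:

.. code:: bash

    # Match the container's paperless user and group to the host account that
    # owns the consumption directory.
    USERMAP_UID=1000
    USERMAP_GID=1000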
|
||||
|
||||
PAPERLESS_OCR_LANGUAGES=<list>
|
||||
Additional OCR languages to install. By default, paperless comes with
|
||||
English, German, Italian, Spanish and French. If your language is not in this list, install
|
||||
additional languages with this configuration option:
|
||||
|
||||
.. code:: bash
|
||||
|
||||
PAPERLESS_OCR_LANGUAGES=tur ces
|
||||
|
||||
To actually use these languages, also set the default OCR language of paperless:
|
||||
|
||||
.. code:: bash
|
||||
|
||||
PAPERLESS_OCR_LANGUAGE=tur
|
||||
|
||||
Defaults to none, which does not install any additional languages.
|
||||
|
||||
|
||||
.. _configuration-update-checking:
|
||||
|
||||
Update Checking
|
||||
###############
|
||||
|
||||
PAPERLESS_ENABLE_UPDATE_CHECK=<bool>
|
||||
Enable (or disable) the automatic check for available updates. This feature is disabled
|
||||
by default but if it is not explicitly set Paperless-ngx will show a message about this.
|
||||
|
||||
If enabled, the feature works by pinging the GitHub API for the latest release, e.g.
|
||||
https://api.github.com/repos/paperless-ngx/paperless-ngx/releases/latest
|
||||
to determine whether a new version is available.
|
||||
|
||||
Actual updating of the app must still be performed manually.
|
||||
|
||||
Note that for users of third-party containers, e.g. linuxserver.io, this notification
|
||||
may be 'ahead' of a new release from the third-party maintainers.
|
||||
|
||||
In either case, no tracking data is collected by the app in any way.
|
||||
|
||||
Defaults to none, which disables the feature.
|
255
docs/consumption.rst
Normal file
@@ -0,0 +1,255 @@
|
||||
.. _consumption:
|
||||
|
||||
Consumption
|
||||
###########
|
||||
|
||||
Once you've got Paperless set up, you need to start feeding documents into it.
|
||||
Currently, there are three options: the consumption directory, IMAP (email), and
|
||||
HTTP POST.
|
||||
|
||||
|
||||
.. _consumption-directory:
|
||||
|
||||
The Consumption Directory
|
||||
=========================
|
||||
|
||||
The primary method of getting documents into your database is by putting them in
|
||||
the consumption directory. The ``document_consumer`` script runs in an infinite
loop, looking for new additions to this directory. When it finds them, it parses
them with OCR, indexes what it finds, encrypts the PDF (if
``PAPERLESS_PASSPHRASE`` is set), and stores it in the media directory.
|
||||
|
||||
Getting stuff into this directory is up to you. If you're running Paperless
|
||||
on your local computer, you might just want to drag and drop files there, but if
|
||||
you're running this on a server and want your scanner to automatically push
|
||||
files to this directory, you'll need to set up some sort of service to accept the
|
||||
files from the scanner. Typically, you're looking at an FTP server like
|
||||
`Proftpd`_ or `Samba`_.
|
||||
|
||||
.. _Proftpd: http://www.proftpd.org/
|
||||
.. _Samba: http://www.samba.org/
|
||||
|
||||
So where is this consumption directory? It's wherever you define it. Look for
|
||||
the ``CONSUMPTION_DIR`` value in ``settings.py``. Set that to somewhere
|
||||
appropriate for your use and put some documents in there. When you're ready,
|
||||
follow the :ref:`consumer <utilities-consumer>` instructions to get it running.
|
||||
|
||||
|
||||
.. _consumption-directory-hook:
|
||||
|
||||
Hooking into the Consumption Process
|
||||
------------------------------------
|
||||
|
||||
Sometimes you may want to do something arbitrary whenever a document is
|
||||
consumed. Rather than try to predict what you may want to do, Paperless lets
|
||||
you execute scripts of your own choosing just before or after a document is
|
||||
consumed, using a couple of simple hooks.
|
||||
|
||||
Just write a script, put it somewhere that Paperless can read & execute, and
|
||||
then put the path to that script in ``paperless.conf`` with the variable name
|
||||
of either ``PAPERLESS_PRE_CONSUME_SCRIPT`` or
|
||||
``PAPERLESS_POST_CONSUME_SCRIPT``. The script will be executed before or
|
||||
after the document is consumed, respectively.
|
||||
|
||||
.. important::
|
||||
|
||||
These scripts are executed in a **blocking** process, which means that if
|
||||
a script takes a long time to run, it can significantly slow down your
|
||||
document consumption flow. If you want things to run asynchronously,
|
||||
you'll have to fork the process in your script and exit.
|
||||
|
||||
|
||||
.. _consumption-directory-hook-variables:
|
||||
|
||||
What Can These Scripts Do?
|
||||
..........................
|
||||
|
||||
It's your script, so you're only limited by your imagination and the laws of
|
||||
physics. However, the following values are passed to the scripts in order:
|
||||
|
||||
|
||||
.. _consumption-director-hook-variables-pre:
|
||||
|
||||
Pre-consumption script
|
||||
::::::::::::::::::::::
|
||||
|
||||
* Document file name
|
||||
|
||||
A simple but common example for this would be creating a simple script like
|
||||
this:
|
||||
|
||||
``/usr/local/bin/ocr-pdf``
|
||||
|
||||
.. code:: bash
|
||||
|
||||
#!/usr/bin/env bash
|
||||
pdf2pdfocr.py -i ${1}
|
||||
|
||||
``/etc/paperless.conf``
|
||||
|
||||
.. code:: bash
|
||||
|
||||
...
|
||||
PAPERLESS_PRE_CONSUME_SCRIPT="/usr/local/bin/ocr-pdf"
|
||||
...
|
||||
|
||||
This will pass the path to the document about to be consumed to ``/usr/local/bin/ocr-pdf``,
|
||||
which will in turn call `pdf2pdfocr.py`_ on your document, which will then
|
||||
overwrite the file with an OCR'd version of the file and exit, at which point
|
||||
the consumption process will begin with the newly modified file.
|
||||
|
||||
.. _pdf2pdfocr.py: https://github.com/LeoFCardoso/pdf2pdfocr
|
||||
|
||||
|
||||
.. _consumption-director-hook-variables-post:
|
||||
|
||||
Post-consumption script
|
||||
:::::::::::::::::::::::
|
||||
|
||||
* Document id
|
||||
* Generated file name
|
||||
* Source path
|
||||
* Thumbnail path
|
||||
* Download URL
|
||||
* Thumbnail URL
|
||||
* Correspondent
|
||||
* Tags
|
||||
|
||||
The script can be in any language you like, but for a simple shell script
|
||||
example, you can take a look at ``post-consumption-example.sh`` in the
|
||||
``scripts`` directory in this project.
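
As a further minimal sketch complementing the argument list above, the script below does
nothing beyond logging the values it receives; the log path is an arbitrary example.

.. code:: bash

    #!/usr/bin/env bash
    # Positional arguments arrive in the order documented above.
    DOCUMENT_ID="${1}"
    GENERATED_FILE_NAME="${2}"
    SOURCE_PATH="${3}"
    THUMBNAIL_PATH="${4}"
    DOWNLOAD_URL="${5}"
    THUMBNAIL_URL="${6}"
    CORRESPONDENT="${7}"
    TAGS="${8}"

    echo "Consumed document ${DOCUMENT_ID} (${GENERATED_FILE_NAME}) from ${SOURCE_PATH}" \
        >> /tmp/paperless-post-consume.log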
|
||||
|
||||
|
||||
.. _consumption-imap:
|
||||
|
||||
IMAP (Email)
|
||||
============
|
||||
|
||||
Another handy way to get documents into your database is to email them to
|
||||
yourself. The typical use-case would be to be out for lunch and want to send a
|
||||
copy of the receipt back to your system at home. Paperless can be taught to
|
||||
pull emails down from an arbitrary account and dump them into the consumption
|
||||
directory where the process :ref:`above <consumption-directory>` will follow the
|
||||
usual pattern on consuming the document.
|
||||
|
||||
Some things you need to know about this feature:
|
||||
|
||||
* It's disabled by default. By setting the values below it will be enabled.
|
||||
* It's been tested in a limited environment, so it may not work for you (please
|
||||
submit a pull request if you can!)
|
||||
* It's designed to **delete mail from the server once consumed**. So don't go
|
||||
pointing this to your personal email account and wonder where all your stuff
|
||||
went.
|
||||
* Currently, only one photo (attachment) per email will work.
|
||||
|
||||
So, with all that in mind, here's what you do to get it running:
|
||||
|
||||
1. Set up a new email account somewhere, or if you're feeling daring, create a
|
||||
folder in an existing email box and note the path to that folder.
|
||||
2. In ``/etc/paperless.conf`` set all of the appropriate values in
|
||||
``PATHS AND FOLDERS`` and ``SECURITY``.
|
||||
If you decided to use a subfolder of an existing account, then make sure you
|
||||
set ``PAPERLESS_CONSUME_MAIL_INBOX`` accordingly here. You also have to set
|
||||
the ``PAPERLESS_EMAIL_SECRET`` to something you can remember 'cause you'll
|
||||
have to include that in every email you send.
|
||||
3. Restart the :ref:`consumer <utilities-consumer>`. The consumer will check
|
||||
the configured email account at startup and from then on every 10 minutes
|
||||
for something new, and pull down whatever it finds.
|
||||
4. Send yourself an email! Note that the subject is treated as the file name,
|
||||
so if you set the subject to ``Correspondent - Title - tag,tag,tag``, you'll
|
||||
get what you expect. Also, you must include the aforementioned secret
|
||||
string in every email so the fetcher knows that it's safe to import.
|
||||
Note that Paperless only allows the email title to consist of safe characters
|
||||
to be imported. These consist of alpha-numeric characters and ``-_ ,.'``.
|
||||
5. After a few minutes, the consumer will poll your mailbox, pull down the
|
||||
message, and place the attachment in the consumption directory with the
|
||||
appropriate name. A few minutes later, the consumer will import it like any
|
||||
other file.
|
||||
|
||||
|
||||
.. _consumption-http:
|
||||
|
||||
HTTP POST
|
||||
=========
|
||||
|
||||
You can also submit a document via HTTP POST, so long as you do so after
|
||||
authenticating. To push your document to Paperless, send an HTTP POST to the
|
||||
server with the following name/value pairs:
|
||||
|
||||
* ``correspondent``: The name of the document's correspondent. Note that there
|
||||
are restrictions on what characters you can use here. Specifically,
|
||||
alphanumeric characters, `-`, `,`, `.`, and `'` are ok, everything else is
|
||||
out. You also can't use the sequence ` - ` (space, dash, space).
|
||||
* ``title``: The title of the document. The rules for characters are the same
  here as for the correspondent.
|
||||
* ``document``: The file you're uploading
|
||||
|
||||
Specify ``enctype="multipart/form-data"``, and then POST your file with::
|
||||
|
||||
Content-Disposition: form-data; name="document"; filename="whatever.pdf"
|
||||
|
||||
An example of this in HTML is a typical form:
|
||||
|
||||
.. code:: html
|
||||
|
||||
<form method="post" enctype="multipart/form-data">
|
||||
<input type="text" name="correspondent" value="My Correspondent" />
|
||||
<input type="text" name="title" value="My Title" />
|
||||
<input type="file" name="document" />
|
||||
<input type="submit" name="go" value="Do the thing" />
|
||||
</form>
|
||||
|
||||
But a potentially more useful way to do this would be in Python. Here we use
|
||||
the requests library to handle basic authentication and to send the POST data
|
||||
to the URL.
|
||||
|
||||
.. code:: python
|
||||
|
||||
import os
|
||||
|
||||
from hashlib import sha256
|
||||
|
||||
import requests
|
||||
from requests.auth import HTTPBasicAuth
|
||||
|
||||
# You authenticate via BasicAuth or with a session id.
|
||||
# We use BasicAuth here
|
||||
username = "my-username"
|
||||
password = "my-super-secret-password"
|
||||
|
||||
# Where you have Paperless installed and listening
|
||||
url = "http://localhost:8000/push"
|
||||
|
||||
# Document metadata
|
||||
correspondent = "Test Correspondent"
|
||||
title = "Test Title"
|
||||
|
||||
# The local file you want to push
|
||||
path = "/path/to/some/directory/my-document.pdf"
|
||||
|
||||
|
||||
with open(path, "rb") as f:
|
||||
|
||||
response = requests.post(
|
||||
url=url,
|
||||
data={"title": title, "correspondent": correspondent},
|
||||
files={"document": (os.path.basename(path), f, "application/pdf")},
|
||||
auth=HTTPBasicAuth(username, password),
|
||||
allow_redirects=False
|
||||
)
|
||||
|
||||
if response.status_code == 202:
|
||||
|
||||
# Everything worked out ok
|
||||
print("Upload successful")
|
||||
|
||||
else:
|
||||
|
||||
# If you don't get a 202, it's probably because your credentials
|
||||
# are wrong or something. This will give you a rough idea of what
|
||||
# happened.
|
||||
|
||||
print("We got HTTP status code: {}".format(response.status_code))
|
||||
for k, v in response.headers.items():
|
||||
print("{}: {}".format(k, v))
|
141
docs/contributing.rst
Normal file
@@ -0,0 +1,141 @@
|
||||
.. _contributing:
|
||||
|
||||
Contributing to Paperless
|
||||
#########################
|
||||
|
||||
Maybe you've been using Paperless for a while and want to add a feature or two,
|
||||
or maybe you've come across a bug that you have some ideas how to solve. The
|
||||
beauty of Free software is that you can see what's wrong and help to get it
|
||||
fixed for everyone!
|
||||
|
||||
|
||||
How to Get Your Changes Rolled Into Paperless
|
||||
=============================================
|
||||
|
||||
If you've found a bug, but don't know how to fix it, you can always post an
|
||||
issue on `GitHub`_ in the hopes that someone will have the time to fix it for
|
||||
you. If however you're the one with the time, pull requests are always
|
||||
welcome, you just have to make sure that your code conforms to a few standards:
|
||||
|
||||
Pep8
|
||||
----
|
||||
|
||||
It's the standard for all Python development, so it's `very well documented`_.
|
||||
The short version is:
|
||||
|
||||
* Lines should wrap at 79 characters
|
||||
* Use ``snake_case`` for variables, ``CamelCase`` for classes, and ``ALL_CAPS``
|
||||
for constants.
|
||||
* Space out your operators: ``stuff + 7`` instead of ``stuff+7``
|
||||
* Two empty lines between classes and functions, but one empty line between
class methods (see the short sketch after this list).
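
A contrived sketch (not taken from the code base) that shows those conventions
together:

.. code:: python

    MAX_UPLOAD_SIZE = 1024 * 1024 * 30   # constants in ALL_CAPS


    class UploadChecker:                 # classes in CamelCase

        def is_too_big(self, size_in_bytes):
            # variables in snake_case, operators spaced out
            remaining_space = MAX_UPLOAD_SIZE - size_in_bytes
            return remaining_space < 0

        def is_empty(self, size_in_bytes):
            return size_in_bytes == 0


    def check_upload(size_in_bytes):
        checker = UploadChecker()
        too_big = checker.is_too_big(size_in_bytes)
        return not too_big and not checker.is_empty(size_in_bytes)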
|
||||
|
||||
There's more to it than that, but if you follow those, you'll probably be
|
||||
alright. When you submit your pull request, there's a pep8 checker that'll
|
||||
look at your code to see if anything is off. If it finds anything, it'll
|
||||
complain at you until you fix it.
|
||||
|
||||
|
||||
Additional Style Guides
|
||||
-----------------------
|
||||
|
||||
Where pep8 is ambiguous, I've tried to be a little more specific. These rules
|
||||
aren't hard-and-fast, but if you can conform to them, I'll appreciate it and
|
||||
spend less time bringing your PR into conformance before merging:
|
||||
|
||||
|
||||
Function calls
|
||||
..............
|
||||
|
||||
If you're calling a function and that necessitates more than one line of code,
|
||||
please format it like this:
|
||||
|
||||
.. code:: python
|
||||
|
||||
my_function(
|
||||
argument1,
|
||||
kwarg1="x",
|
||||
kwarg2="y",
|
||||
another_really_long_kwarg="some big value",
|
||||
a_kwarg_calling_another_long_function=another_function(
|
||||
another_arg,
|
||||
another_kwarg="kwarg!"
|
||||
)
|
||||
)
|
||||
|
||||
This is all in the interest of code uniformity rather than anything else. If
|
||||
we stick to a style, everything is understandable in the same way.
|
||||
|
||||
|
||||
Quoting Strings
|
||||
...............
|
||||
|
||||
pep8 is a little too open-minded on this for my liking. Python strings should
|
||||
be quoted with double quotes (``"``) except in cases where the resulting string
|
||||
would require too much escaping of a double quote, in which case a
single-quoted or triple-quoted string will do:
|
||||
|
||||
.. code:: python
|
||||
|
||||
my_string = "This is my string"
|
||||
problematic_string = 'This is a "string" with "quotes" in it'
|
||||
|
||||
In HTML templates, please use double-quotes for tag attributes, and single
|
||||
quotes for arguments passed to Django template tags:
|
||||
|
||||
.. code:: html
|
||||
|
||||
<div class="stuff">
|
||||
<a href="{% url 'some-url-name' pk='w00t' %}">link this</a>
|
||||
</div>
|
||||
|
||||
This is to keep linters happy: otherwise they look at an HTML file and see an
attribute closing the ``"`` before it should have been closed.
|
||||
|
||||
--
|
||||
|
||||
That's all there is in terms of guidelines, so I hope it's not too daunting.
|
||||
|
||||
|
||||
Indentation & Spacing
|
||||
.....................
|
||||
|
||||
When it comes to indentation:
|
||||
|
||||
* For Python, the rule is: follow pep8 and use 4 spaces.
|
||||
* For Javascript, CSS, and HTML, please use 1 tab.
|
||||
|
||||
Additionally, Django templates making use of block elements like ``{% if %}``,
|
||||
``{% for %}``, and ``{% block %}`` etc. should be indented:
|
||||
|
||||
Good:
|
||||
|
||||
.. code:: html
|
||||
|
||||
{% block stuff %}
|
||||
<h1>This is the stuff</h1>
|
||||
{% endblock %}
|
||||
|
||||
Bad:
|
||||
|
||||
.. code:: html
|
||||
|
||||
{% block stuff %}
|
||||
<h1>This is the stuff</h1>
|
||||
{% endblock %}
|
||||
|
||||
|
||||
The Code of Conduct
|
||||
===================
|
||||
|
||||
Paperless has a `code of conduct`_. It's a lot like the other ones you see out
|
||||
there, with a few small changes, but basically it boils down to:
|
||||
|
||||
> Don't be an ass, or you might get banned.
|
||||
|
||||
I'm proud to say that the CoC has never had to be enforced because everyone has
|
||||
been awesome, friendly, and professional.
|
||||
|
||||
.. _GitHub: https://github.com/danielquinn/paperless/issues
|
||||
.. _very well documented: https://www.python.org/dev/peps/pep-0008/
|
||||
.. _code of conduct: https://github.com/danielquinn/paperless/blob/master/CODE_OF_CONDUCT.md
|
42
docs/customising.rst
Normal file
@@ -0,0 +1,42 @@
|
||||
.. _customising:
|
||||
|
||||
Customising Paperless
|
||||
#####################
|
||||
|
||||
Currently, Paperless' interface is just the default Django admin, which,
while powerful, is rather boring.
|
||||
face-lift, or if you simply want to adjust the colours, contrast, or font size
|
||||
to make things easier to read, you can do that by adding your own CSS or
|
||||
Javascript quite easily.
|
||||
|
||||
|
||||
.. _customising-overrides:
|
||||
|
||||
Overrides
|
||||
=========
|
||||
|
||||
On every page load, Paperless looks in your media root directory (the
directory defined by your ``PAPERLESS_MEDIADIR`` configuration variable or
the default, ``<project root>/media/``) for two files:
|
||||
|
||||
* ``overrides.css``
|
||||
* ``overrides.js``
|
||||
|
||||
If it finds either or both of those files, they'll be loaded into the page: the
|
||||
CSS in the ``<head>``, and the Javascript stuffed into the last line of the
|
||||
``<body>``.
|
||||
|
||||
|
||||
.. _customising-overrides-note:
|
||||
|
||||
An important note about customisation
|
||||
-------------------------------------
|
||||
|
||||
Any changes you make to the site with your CSS or Javascript are likely to
|
||||
depend on the structure of the current HTML and/or the existing CSS rules. For
|
||||
the most part it's safe to assume that these bits won't change, but *sometimes
|
||||
they do* as features are added or bugs are fixed.
|
||||
|
||||
If you make a change that you think others would appreciate though, submit it
|
||||
as a pull request and maybe we can find a way to work it into the project by
|
||||
default!
|
@@ -1,431 +1,112 @@
|
||||
.. _extending:
|
||||
|
||||
Paperless-ngx Development
|
||||
#########################
|
||||
|
||||
This section describes the steps you need to take to start development on paperless-ngx.
|
||||
|
||||
Check out the source from github. The repository is organized in the following way:
|
||||
|
||||
* ``main`` always represents the latest release and will only see changes
|
||||
when a new release is made.
|
||||
* ``dev`` contains the code that will be in the next release.
|
||||
* ``feature-X`` contain bigger changes that will be in some release, but not
|
||||
necessarily the next one.
|
||||
|
||||
When making functional changes to paperless, *always* make your changes on the ``dev`` branch.
|
||||
|
||||
Apart from that, the folder structure is as follows:
|
||||
|
||||
* ``docs/`` - Documentation.
|
||||
* ``src-ui/`` - Code of the front end.
|
||||
* ``src/`` - Code of the back end.
|
||||
* ``scripts/`` - Various scripts that help with different parts of development.
|
||||
* ``docker/`` - Files required to build the docker image.
|
||||
|
||||
Contributing to Paperless
|
||||
=========================
|
||||
|
||||
Maybe you've been using Paperless for a while and want to add a feature or two,
|
||||
or maybe you've come across a bug that you have some ideas how to solve. The
|
||||
beauty of open source software is that you can see what's wrong and help to get
|
||||
it fixed for everyone!
|
||||
|
||||
Before contributing please review our `code of conduct`_ and other important
|
||||
information in the `contributing guidelines`_.
|
||||
|
||||
.. _code-formatting-with-pre-commit-hooks:
|
||||
|
||||
Code formatting with pre-commit Hooks
|
||||
=====================================
|
||||
|
||||
To ensure a consistent style and formatting across the project source, the project
|
||||
utilizes a Git `pre-commit` hook to perform some formatting and linting before a
|
||||
commit is allowed. That way, everyone uses the same style and some common issues
|
||||
can be caught early on. See below for installation instructions.
|
||||
|
||||
Once installed, hooks will run when you commit. If the formatting isn't quite right
|
||||
or a linter catches something, the commit will be rejected. You'll need to look at the
|
||||
output and fix the issue. Some hooks, such as the Python formatting tool `black`,
|
||||
will format failing files, so all you need to do is `git add` those files again and
|
||||
retry your commit.
|
||||
|
||||
Initial setup and first start
|
||||
=============================
|
||||
|
||||
After you forked and cloned the code from github you need to perform a first-time setup.
|
||||
To do the setup you need to perform the steps from the following chapters in a certain order:
|
||||
|
||||
1. Install prerequisites + pipenv as mentioned in :ref:`Bare metal route <setup-bare_metal>`
|
||||
2. Copy ``paperless.conf.example`` to ``paperless.conf`` and enable debug mode.
|
||||
3. Install the Angular CLI interface:
|
||||
|
||||
.. code:: shell-session
|
||||
|
||||
$ npm install -g @angular/cli
|
||||
|
||||
4. Install pre-commit
|
||||
|
||||
.. code:: shell-session
|
||||
|
||||
pre-commit install
|
||||
|
||||
5. Create ``consume`` and ``media`` folders in the cloned root folder.
|
||||
|
||||
.. code:: shell-session
|
||||
|
||||
mkdir -p consume media
|
||||
|
||||
6. You can now either ...
|
||||
|
||||
* install redis or
|
||||
* use the included scripts/start-services.sh to use docker to fire up a redis instance (and some other services such as tika, gotenberg and a postgresql server) or
|
||||
* spin up a bare redis container
|
||||
|
||||
.. code:: shell-session
|
||||
|
||||
docker run -d -p 6379:6379 --restart unless-stopped redis:latest
|
||||
|
||||
7. Install the python dependencies by running the following in the ``src/`` directory.
|
||||
|
||||
.. code:: shell-session
|
||||
|
||||
pipenv install --dev
|
||||
|
||||
* Make sure you're using python 3.9.x or lower. Otherwise you might get issues with building dependencies. You can use `pyenv <https://github.com/pyenv/pyenv>`_ to install a specific python version.
|
||||
|
||||
8. Generate the static UI so you can log in to get a session, which is required for frontend development (this only needs to be done once). From the ``src-ui`` directory:
|
||||
|
||||
.. code:: shell-session
|
||||
|
||||
npm install .
|
||||
./node_modules/.bin/ng build --configuration production
|
||||
|
||||
9. Apply migrations and create a superuser for your dev instance:
|
||||
|
||||
.. code:: shell-session
|
||||
|
||||
python3 manage.py migrate
|
||||
python3 manage.py createsuperuser
|
||||
|
||||
10. Now spin up the dev backend. Depending on which part of paperless you're developing for, you need to have some or all of the following services running.
|
||||
|
||||
.. code:: shell-session
|
||||
|
||||
python3 manage.py runserver & python3 manage.py document_consumer & python3 manage.py qcluster
|
||||
|
||||
11. Log in with the superuser credentials created in step 9 at ``http://localhost:8000`` to create a session that enables you to use the backend.
|
||||
|
||||
The backend development environment is now ready. To start frontend development, go to ``/src-ui`` and run ``ng serve``. From there you can use ``http://localhost:4200`` for a preview.
|
||||
|
||||
Back end development
|
||||
====================
|
||||
|
||||
The backend is a django application. PyCharm works well for development, but you can use whatever
|
||||
you want.
|
||||
|
||||
Configure the IDE to use the src/ folder as the base source folder. Configure the following
|
||||
launch configurations in your IDE:
|
||||
|
||||
* python3 manage.py runserver
|
||||
* python3 manage.py qcluster
|
||||
* python3 manage.py document_consumer
|
||||
|
||||
To start them all:
|
||||
|
||||
.. code:: shell-session
|
||||
|
||||
python3 manage.py runserver & python3 manage.py document_consumer & python3 manage.py qcluster
|
||||
|
||||
Testing and code style:
|
||||
|
||||
* Run ``pytest`` in the src/ directory to execute all tests. This also generates an HTML coverage
report. When running tests, ``paperless.conf`` is loaded as well. However, the tests rely on the default
configuration. This is not ideal, but for now make sure no settings except for DEBUG are overridden when testing.
|
||||
* Coding style is enforced by the Git pre-commit hooks. These will ensure your code is formatted and do some
|
||||
linting when you do a `git commit`.
|
||||
* You can also run ``black`` manually to format your code
|
||||
|
||||
.. note::
|
||||
|
||||
The line length rule E501 is generally useful for getting multiple source files
|
||||
next to each other on the screen. However, in some cases, it's just not possible
to make some lines fit, especially for complicated ``if`` conditions. Append ``# NOQA: E501``
|
||||
to disable this check for certain lines.
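
A contrived example (not taken from the code base) of what such an exception
looks like:

.. code:: python

    def looks_relevant(title, correspondent, tags):
        # The condition below does not fit into 79 characters, so the length
        # check is silenced for this one line only.
        return correspondent is not None and "invoice" in title.lower() and any(tag in ("finance", "bank", "tax") for tag in tags)  # NOQA: E501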
|
||||
|
||||
Front end development
|
||||
=====================
|
||||
|
||||
The front end is built using Angular. In order to get started, you need ``npm``.
|
||||
Install the Angular CLI interface with
|
||||
|
||||
.. code:: shell-session
|
||||
|
||||
$ npm install -g @angular/cli
|
||||
|
||||
and make sure that it's on your path. Next, in the src-ui/ directory, install the
|
||||
required dependencies of the project.
|
||||
|
||||
.. code:: shell-session
|
||||
|
||||
$ npm install
|
||||
|
||||
You can launch a development server by running
|
||||
|
||||
.. code:: shell-session
|
||||
|
||||
$ ng serve
|
||||
|
||||
This will automatically update whenever you save. However, in-place compilation might fail
|
||||
on syntax errors, in which case you need to restart it.
|
||||
|
||||
By default, the development server is available on ``http://localhost:4200/`` and is configured
|
||||
to access the API at ``http://localhost:8000/api/``, which is the default of the backend.
|
||||
If you enabled DEBUG on the back end, several security overrides for allowed hosts, CORS and
|
||||
X-Frame-Options are in place so that the front end behaves exactly as in production. This also
|
||||
relies on you being logged into the back end. Without a valid session, the front end will simply
|
||||
not work.
|
||||
|
||||
Testing and code style:
|
||||
|
||||
* The frontend code (.ts, .html, .scss) uses ``prettier`` for code formatting via the Git
|
||||
``pre-commit`` hooks which run automatically on commit. See
|
||||
:ref:`above <code-formatting-with-pre-commit-hooks>` for installation. You can also run this
|
||||
via cli with a command such as
|
||||
|
||||
.. code:: shell-session
|
||||
|
||||
$ git ls-files -- '*.ts' | xargs pre-commit run prettier --files
|
||||
|
||||
* Frontend testing uses jest and cypress. There is currently a need for significantly more
|
||||
frontend tests. Unit tests and e2e tests, respectively, can be run non-interactively with:
|
||||
|
||||
.. code:: shell-session
|
||||
|
||||
$ ng test
|
||||
$ npm run e2e:ci
|
||||
|
||||
Cypress also includes a UI which can be run from within the ``src-ui`` directory with
|
||||
|
||||
.. code:: shell-session
|
||||
|
||||
$ ./node_modules/.bin/cypress open
|
||||
|
||||
In order to build the front end and serve it as part of django, execute
|
||||
|
||||
.. code:: shell-session
|
||||
|
||||
$ ng build --prod
|
||||
|
||||
This will build the front end and put it in a location from which the Django server will serve
|
||||
it as static content. This way, you can verify that authentication is working.
|
||||
|
||||
|
||||
Localization
|
||||
============
|
||||
|
||||
Paperless is available in many different languages. Since paperless consists of both a django
application and an Angular front end, both of these parts have to be translated separately.
|
||||
|
||||
Front end localization
|
||||
----------------------
|
||||
|
||||
* The Angular front end does localization according to the `Angular documentation <https://angular.io/guide/i18n>`_.
|
||||
* The source language of the project is "en_US".
|
||||
* The source strings end up in the file "src-ui/messages.xlf".
|
||||
* The translated strings need to be placed in the "src-ui/src/locale/" folder.
|
||||
* In order to extract added or changed strings from the source files, call ``ng xi18n --ivy``.
|
||||
|
||||
Adding new languages requires adding the translated files in the "src-ui/src/locale/" folder and adjusting a couple files.
|
||||
|
||||
1. Adjust "src-ui/angular.json":
|
||||
|
||||
.. code:: json
|
||||
|
||||
"i18n": {
|
||||
"sourceLocale": "en-US",
|
||||
"locales": {
|
||||
"de": "src/locale/messages.de.xlf",
|
||||
"nl-NL": "src/locale/messages.nl_NL.xlf",
|
||||
"fr": "src/locale/messages.fr.xlf",
|
||||
"en-GB": "src/locale/messages.en_GB.xlf",
|
||||
"pt-BR": "src/locale/messages.pt_BR.xlf",
|
||||
"language-code": "language-file"
|
||||
}
|
||||
}
|
||||
|
||||
2. Add the language to the available options in "src-ui/src/app/services/settings.service.ts":
|
||||
|
||||
.. code:: typescript
|
||||
|
||||
getLanguageOptions(): LanguageOption[] {
|
||||
return [
|
||||
{code: "en-us", name: $localize`English (US)`, englishName: "English (US)", dateInputFormat: "mm/dd/yyyy"},
|
||||
{code: "en-gb", name: $localize`English (GB)`, englishName: "English (GB)", dateInputFormat: "dd/mm/yyyy"},
|
||||
{code: "de", name: $localize`German`, englishName: "German", dateInputFormat: "dd.mm.yyyy"},
|
||||
{code: "nl", name: $localize`Dutch`, englishName: "Dutch", dateInputFormat: "dd-mm-yyyy"},
|
||||
{code: "fr", name: $localize`French`, englishName: "French", dateInputFormat: "dd/mm/yyyy"},
|
||||
{code: "pt-br", name: $localize`Portuguese (Brazil)`, englishName: "Portuguese (Brazil)", dateInputFormat: "dd/mm/yyyy"}
|
||||
// Add your new language here
|
||||
]
|
||||
}
|
||||
|
||||
``dateInputFormat`` is a special string that defines the behavior of the date input fields and absolutely needs to contain "dd", "mm" and "yyyy".
|
||||
|
||||
3. Import and register the Angular data for this locale in "src-ui/src/app/app.module.ts":
|
||||
|
||||
.. code:: typescript
|
||||
|
||||
import localeDe from '@angular/common/locales/de';
|
||||
registerLocaleData(localeDe)
|
||||
|
||||
Back end localization
|
||||
---------------------
|
||||
|
||||
A majority of the strings that appear in the back end appear only when the admin is used. However,
|
||||
some of these are still shown on the front end (such as error messages).
|
||||
|
||||
* The django application does localization according to the `django documentation <https://docs.djangoproject.com/en/3.1/topics/i18n/translation/>`_.
|
||||
* The source language of the project is "en_US".
|
||||
* Localization files end up in the folder "src/locale/".
|
||||
* In order to extract strings from the application, call ``python3 manage.py makemessages -l en_US``. This is important after making changes to translatable strings.
|
||||
* The message files need to be compiled for them to show up in the application. Call ``python3 manage.py compilemessages`` to do this. The generated files don't get
|
||||
committed into git, since these are derived artifacts. The build pipeline takes care of executing this command.
|
||||
|
||||
Adding new languages requires adding the translated files in the "src/locale/" folder and adjusting the file "src/paperless/settings.py" to include the new language:
|
||||
|
||||
.. code:: python
|
||||
|
||||
LANGUAGES = [
|
||||
("en-us", _("English (US)")),
|
||||
("en-gb", _("English (GB)")),
|
||||
("de", _("German")),
|
||||
("nl-nl", _("Dutch")),
|
||||
("fr", _("French")),
|
||||
("pt-br", _("Portuguese (Brazil)")),
|
||||
# Add language here.
|
||||
]
|
||||
|
||||
|
||||
Building the documentation
|
||||
==========================
|
||||
|
||||
The documentation is built using sphinx. I've configured ReadTheDocs to automatically build
|
||||
the documentation when changes are pushed. If you want to build the documentation locally,
|
||||
this is how you do it:
|
||||
|
||||
1. Install python dependencies.
|
||||
|
||||
.. code:: shell-session
|
||||
|
||||
$ cd /path/to/paperless
|
||||
$ pipenv install --dev
|
||||
|
||||
2. Build the documentation
|
||||
|
||||
.. code:: shell-session
|
||||
|
||||
$ cd /path/to/paperless/docs
|
||||
$ pipenv run make clean html
|
||||
|
||||
This will build the HTML documentation, and put the resulting files in the ``_build/html``
|
||||
directory.
|
||||
|
||||
Building the Docker image
|
||||
=========================
|
||||
|
||||
The docker image is primarily built by the GitHub actions workflow, but it can be
|
||||
faster when developing to build and tag an image locally.
|
||||
|
||||
To provide the build arguments automatically, build the image using the helper
|
||||
script ``build-docker-image.sh``.
|
||||
|
||||
Building the docker image from source:
|
||||
|
||||
.. code:: shell-session
|
||||
|
||||
./build-docker-image.sh Dockerfile -t <your-tag>
|
||||
|
||||
Extending Paperless
|
||||
===================
|
||||
|
||||
Paperless does not have any fancy plugin system and probably never will. However,
|
||||
some parts of the application have been designed to allow easy integration of additional
|
||||
features without any modification to the base code.
|
||||
For the most part, Paperless is monolithic, so extending it is often best
|
||||
managed by way of modifying the code directly and issuing a pull request on
|
||||
`GitHub`_. However, over time the project has been evolving to be a little
|
||||
more "pluggable" so that users can write their own stuff that talks to it.
|
||||
|
||||
Making custom parsers
|
||||
---------------------
|
||||
.. _GitHub: https://github.com/danielquinn/paperless
|
||||
|
||||
Paperless uses parsers to add documents to paperless. A parser is responsible for:
|
||||
|
||||
* Retrieve the content from the original
|
||||
* Create a thumbnail
|
||||
* Optional: Retrieve a created date from the original
|
||||
* Optional: Create an archived document from the original
|
||||
.. _extending-parsers:
|
||||
|
||||
Custom parsers can be added to paperless to support more file types. In order to do that,
|
||||
you need to write the parser itself and announce its existence to paperless.
|
||||
Parsers
|
||||
-------
|
||||
|
||||
The parser itself must extend ``documents.parsers.DocumentParser`` and must implement the
|
||||
methods ``parse`` and ``get_thumbnail``. You can provide your own implementation of
``get_date`` if you don't want to rely on paperless' default date guessing mechanisms.
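
For instance, if your documents follow a naming convention (a hypothetical one
is assumed here), the created date could be read from the file name with a
small helper that your parser's ``get_date`` implementation might call:

.. code:: python

    import datetime
    import os
    import re


    def created_date_from_filename(document_path):
        # Expects file names such as "2022-03-01 some title.pdf";
        # returns None if the name doesn't match.
        name = os.path.basename(document_path)
        match = re.match(r"(\d{4})-(\d{2})-(\d{2})", name)
        if match is None:
            return None
        year, month, day = (int(part) for part in match.groups())
        return datetime.datetime(year, month, day)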
|
||||
You can leverage Paperless' consumption model to have it consume files *other*
|
||||
than ones handled by default like ``.pdf``, ``.jpg``, and ``.tiff``. To do so,
|
||||
you simply follow Django's convention of creating a new app, with a few key
|
||||
requirements.
|
||||
|
||||
|
||||
.. _extending-parsers-parserspy:
|
||||
|
||||
parsers.py
|
||||
..........
|
||||
|
||||
In this file, you create a class that extends
|
||||
``documents.parsers.DocumentParser`` and go about implementing the three
|
||||
required methods:
|
||||
|
||||
* ``get_thumbnail()``: Returns the path to a file we can use as a thumbnail for
|
||||
this document.
|
||||
* ``get_text()``: Returns the text from the document and only the text.
|
||||
* ``get_date()``: If possible, this returns the date of the document, otherwise
|
||||
it should return ``None``.
|
||||
|
||||
|
||||
.. _extending-parsers-signalspy:
|
||||
|
||||
signals.py
|
||||
..........
|
||||
|
||||
At consumption time, Paperless emits a ``document_consumer_declaration``
|
||||
signal which your module has to react to in order to let the consumer know
|
||||
whether or not it's capable of handling a particular file. Think of it like
|
||||
this:
|
||||
|
||||
1. Consumer finds a file in the consumption directory.
|
||||
2. It asks all the available parsers: *"Hey, can you handle this file?"*
|
||||
3. Each parser responds with either ``None`` meaning they can't handle the
|
||||
file, or a dictionary in the following format:
|
||||
|
||||
.. code:: python

    {
        "parser": <the class name>,
        "weight": <an integer>
    }

The consumer compares the ``weight`` values from all respondents and uses the
class with the highest value to consume the document. The default parser,
``RasterisedDocumentParser``, has a weight of ``0``.

The parser class itself might look like this:

.. code:: python

    class MyCustomParser(DocumentParser):

        def parse(self, document_path, mime_type):
            # This method does not return anything. Rather, you should assign
            # whatever you got from the document to the following fields:

            # The content of the document.
            self.text = "content"

            # Optional: path to a PDF document that you created from the original.
            self.archive_path = os.path.join(self.tempdir, "archived.pdf")

            # Optional: "created" date of the document.
            self.date = get_created_from_metadata(document_path)

        def get_thumbnail(self, document_path, mime_type):
            # This should return the path to a thumbnail you created for this
            # document.
            return os.path.join(self.tempdir, "thumb.png")

If you encounter any issues during parsing, raise a ``documents.parsers.ParseError``.

The ``self.tempdir`` directory is a temporary directory that is guaranteed to be empty
and removed after consumption has finished. You can use that directory to store any
intermediate files and also use it to store the thumbnail / archived document.

After that, you need to announce your parser to paperless by connecting a
handler to the ``document_consumer_declaration`` signal. Have a look at the file
``src/paperless_tesseract/apps.py`` to see how that's done. The handler is a method
that returns information about your parser:

.. code:: python

    def myparser_consumer_declaration(sender, **kwargs):
        return {
            "parser": MyCustomParser,
            "weight": 0,
            "mime_types": {
                "application/pdf": ".pdf",
                "image/jpeg": ".jpg",
            }
        }

* ``parser`` is a reference to a class that extends ``DocumentParser``.

* ``weight`` is used whenever two or more parsers are able to parse a file: the parser with
  the higher weight wins. This can be used to override the parsers provided by
  paperless.

* ``mime_types`` is a dictionary. The keys are the mime types your parser supports and the
  values are the default file extensions that paperless should use when storing files and
  serving them for download. We could guess that from the file extensions, but some mime
  types have many extensions associated with them and the python methods responsible for
  guessing the extension do not always return the same value.

.. _code of conduct: https://github.com/paperless-ngx/paperless-ngx/blob/main/CODE_OF_CONDUCT.md
.. _contributing guidelines: https://github.com/paperless-ngx/paperless-ngx/blob/main/CONTRIBUTING.md


.. _extending-parsers-appspy:

apps.py
.......

This is a standard Django file, but you'll need to add some code to it to
connect your parser to the ``document_consumer_declaration`` signal (a sketch
of this wiring appears at the end of this section).


.. _extending-parsers-finally:

Finally
.......

The last step is to update ``settings.py`` to include your new module.
Eventually, this will be dynamic, but at the moment, you have to edit the
``INSTALLED_APPS`` section manually. Simply add the path to your AppConfig to
the list like this:

.. code:: python

    INSTALLED_APPS = [
        ...
        "my_module.apps.MyModuleConfig",
        ...
    ]

Order doesn't matter, but generally it's a good idea to place your module lower
in the list so that you don't end up accidentally overriding project defaults
somewhere.


.. _extending-parsers-example:

An Example
..........

The core Paperless functionality is based on this design, so if you want to see
what a parser module should look like, have a look at `parsers.py`_,
`signals.py`_, and `apps.py`_ in the `paperless_tesseract`_ module.

.. _parsers.py: https://github.com/danielquinn/paperless/blob/master/src/paperless_tesseract/parsers.py
.. _signals.py: https://github.com/danielquinn/paperless/blob/master/src/paperless_tesseract/signals.py
.. _apps.py: https://github.com/danielquinn/paperless/blob/master/src/paperless_tesseract/apps.py
.. _paperless_tesseract: https://github.com/danielquinn/paperless/blob/master/src/paperless_tesseract/
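
To make the ``apps.py`` wiring above more concrete, here is a minimal sketch of
an ``AppConfig`` that connects a declaration handler to the
``document_consumer_declaration`` signal. The module, class, and handler names
are hypothetical; the pattern mirrors what ``src/paperless_tesseract/apps.py``
does.

.. code:: python

    from django.apps import AppConfig


    class MyModuleConfig(AppConfig):

        name = "my_module"

        def ready(self):
            # Import here (not at module level) so Django is fully set up first.
            from documents.signals import document_consumer_declaration

            from .signals import myparser_consumer_declaration

            document_consumer_declaration.connect(myparser_consumer_declaration)

            AppConfig.ready(self)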
|
||||
|
117
docs/faq.rst
@@ -1,117 +0,0 @@
|
||||
|
||||
**************************
|
||||
Frequently asked questions
|
||||
**************************
|
||||
|
||||
**Q:** *What's the general plan for Paperless-ngx?*
|
||||
|
||||
**A:** While Paperless-ngx is already considered largely "feature-complete" it is a community-driven
|
||||
project and development will be guided in this way. New features can be submitted via
|
||||
GitHub discussions and "up-voted" by the community but this is not a guarantee the feature
|
||||
will be implemented. This project will always be open to collaboration in the form of PRs,
|
||||
ideas etc.
|
||||
|
||||
**Q:** *I'm using docker. Where are my documents?*
|
||||
|
||||
**A:** Your documents are stored inside the docker volume ``paperless_media``.
|
||||
Docker manages this volume automatically for you. It is a persistent storage
|
||||
and will persist as long as you don't explicitly delete it. The actual location
|
||||
depends on your host operating system. On Linux, chances are high that this location
|
||||
is
|
||||
|
||||
.. code::
|
||||
|
||||
/var/lib/docker/volumes/paperless_media/_data
|
||||
|
||||
.. caution::
|
||||
|
||||
Do not mess with this folder. Don't change permissions and don't move
|
||||
files around manually. This folder is meant to be entirely managed by docker
|
||||
and paperless.
|
||||
|
||||
**Q:** *Let's say I want to switch tools in a year. Can I easily move to other systems?*
|
||||
|
||||
**A:** Your documents are stored as plain files inside the media folder. You can always drag those files
|
||||
out of that folder to use them elsewhere. Here are a couple notes about that.
|
||||
|
||||
* Paperless-ngx never modifies your original documents. It keeps checksums of all documents and uses a
|
||||
scheduled sanity checker to check that they remain the same.
|
||||
* By default, paperless uses the internal ID of each document as its filename. This might not be very
|
||||
convenient for export. However, you can adjust the way files are stored in paperless by
|
||||
:ref:`configuring the filename format <advanced-file_name_handling>`.
|
||||
* :ref:`The exporter <utilities-exporter>` is another easy way to get your files out of paperless with reasonable file names.
|
||||
|
||||
**Q:** *What file types does paperless-ngx support?*
|
||||
|
||||
**A:** Currently, the following files are supported:
|
||||
|
||||
* PDF documents, PNG images, JPEG images, TIFF images and GIF images are processed with OCR and converted into PDF documents.
|
||||
* Plain text documents are supported as well and are added verbatim
|
||||
to paperless.
|
||||
* With the optional Tika integration enabled (see :ref:`Configuration <configuration-tika>`), Paperless also supports various
|
||||
Office documents (.docx, .doc, .odt, .ppt, .pptx, .odp, .xls, .xlsx, .ods).
|
||||
|
||||
Paperless-ngx determines the type of a file by inspecting its content. The
|
||||
file extensions do not matter.
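
This is, roughly, what content-based type detection looks like with the
``python-magic`` library; a sketch with a placeholder file path:

.. code:: python

    import magic  # provided by the "python-magic" package

    with open("/path/to/some/file", "rb") as f:
        mime_type = magic.from_buffer(f.read(2048), mime=True)

    # Prints e.g. "application/pdf", no matter what the file extension says.
    print(mime_type)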
|
||||
|
||||
**Q:** *Will paperless-ngx run on Raspberry Pi?*
|
||||
|
||||
**A:** The short answer is yes. I've tested it on a Raspberry Pi 3 B.
|
||||
The long answer is that certain parts of
|
||||
Paperless will run very slow, such as the OCR. On Raspberry Pi,
|
||||
try to OCR documents before feeding them into paperless so that paperless can
|
||||
reuse the text. The web interface is a lot snappier, since it runs
|
||||
in your browser and paperless has to do much less work to serve the data.
|
||||
|
||||
.. note::
|
||||
|
||||
You can adjust some of the settings so that paperless uses less processing
|
||||
power. See :ref:`setup-less_powerful_devices` for details.
|
||||
|
||||
|
||||
**Q:** *How do I install paperless-ngx on Raspberry Pi?*
|
||||
|
||||
**A:** Docker images are available for arm and arm64 hardware, so just follow
|
||||
the docker-compose instructions. Apart from more required disk space compared to
|
||||
a bare metal installation, docker comes with close to zero overhead, even on
|
||||
Raspberry Pi.
|
||||
|
||||
If you decide to go with the bare metal route, be aware that some of the
|
||||
python requirements do not have precompiled packages for ARM / ARM64. Installation
|
||||
of these will require additional development libraries and compilation will take
|
||||
a long time.
|
||||
|
||||
**Q:** *How do I run this on Unraid?*
|
||||
|
||||
**A:** Paperless-ngx is available as `community app <https://unraid.net/community/apps?q=paperless-ngx>`_
|
||||
in Unraid. `Uli Fahrer <https://github.com/Tooa>`_ created a container template for that.
|
||||
|
||||
**Q:** *How do I run this on my toaster?*
|
||||
|
||||
**A:** I honestly don't know! As for all other devices that might be able
|
||||
to run paperless, you're a bit on your own. If you can't run the docker image,
|
||||
the documentation has instructions for bare metal installs. I'm running
|
||||
paperless on an i3 processor from 2015 or so. This is also what I use to test
|
||||
new releases with. Apart from that, I also have a Raspberry Pi, which I
|
||||
occasionally build the image on and see if it works.
|
||||
|
||||
**Q:** *How do I proxy this with NGINX?*
|
||||
|
||||
**A:** See :ref:`here <setup-nginx>`.
|
||||
|
||||
.. _faq-mod_wsgi:
|
||||
|
||||
**Q:** *How do I get WebSocket support with Apache mod_wsgi*?
|
||||
|
||||
**A:** ``mod_wsgi`` by itself does not support ASGI. Paperless will continue
|
||||
to work with WSGI, but certain features such as status notifications about
|
||||
document consumption won't be available.
|
||||
|
||||
If you want to continue using ``mod_wsgi``, you will have to run an ASGI-enabled
|
||||
web server as well that processes WebSocket connections, and configure Apache to
|
||||
redirect WebSocket connections to this server. Multiple options for ASGI servers
|
||||
exist:
|
||||
|
||||
* ``gunicorn`` with ``uvicorn`` as the worker implementation (the default of paperless)
|
||||
* ``daphne`` as a standalone server, which is the reference implementation for ASGI.
|
||||
* ``uvicorn`` as a standalone server
|