Compare commits

..

3 Commits

| Author | SHA1 | Message | Date |
| --- | --- | --- | --- |
| Trenton H | 1f461a7ee4 | Enables the fixture again | 2025-09-10 09:23:35 -07:00 |
| Trenton H | a2ce0af79f | Get initial durations of all tests | 2025-09-10 09:10:07 -07:00 |
| Trenton H | ba40626838 | Experiment with disabling whitenoise middleware | 2025-09-10 08:52:02 -07:00 |
56 changed files with 654 additions and 1920 deletions


@@ -506,7 +506,6 @@ for the possible codes and their meanings.
 The `localize_date` filter formats a date or datetime object into a localized string using Babel internationalization.
 This takes into account the provided locale for translation. Since this must be used on a date or datetime object,
 you must access the field directly, i.e. `document.created`.
-An ISO string can also be provided to control the output format.
 ###### Syntax
@@ -517,7 +516,7 @@ An ISO string can also be provided to control the output format.
 ###### Parameters
-- `value` (date | datetime | str): Date, datetime object or ISO string to format (datetime should be timezone-aware)
+- `value` (date | datetime): Date or datetime object to format (datetime should be timezone-aware)
 - `format` (str): Format type - either a Babel preset ('short', 'medium', 'long', 'full') or custom pattern
 - `locale` (str): Locale code for localization (e.g., 'en_US', 'fr_FR', 'de_DE')
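For illustration, the retained (right-hand) form of the filter could appear in a template like this, a sketch assuming `document.created` and `document.added` are timezone-aware datetimes as the parameter list above requires:

```jinja2
{{ document.created | localize_date('long', 'de_DE') }}
{{ document.added | localize_date('MMMM yyyy', 'en_US') }}
```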


@@ -1800,23 +1800,3 @@ password. All of these options come from their similarly-named [Django settings]
 #### [`PAPERLESS_EMAIL_USE_SSL=<bool>`](#PAPERLESS_EMAIL_USE_SSL) {#PAPERLESS_EMAIL_USE_SSL}
 : Defaults to false.
-## Remote OCR
-#### [`PAPERLESS_REMOTE_OCR_ENGINE=<str>`](#PAPERLESS_REMOTE_OCR_ENGINE) {#PAPERLESS_REMOTE_OCR_ENGINE}
-: The remote OCR engine to use. Currently only Azure AI is supported as "azureai".
-    Defaults to None, which disables remote OCR.
-#### [`PAPERLESS_REMOTE_OCR_API_KEY=<str>`](#PAPERLESS_REMOTE_OCR_API_KEY) {#PAPERLESS_REMOTE_OCR_API_KEY}
-: The API key to use for the remote OCR engine.
-    Defaults to None.
-#### [`PAPERLESS_REMOTE_OCR_ENDPOINT=<str>`](#PAPERLESS_REMOTE_OCR_ENDPOINT) {#PAPERLESS_REMOTE_OCR_ENDPOINT}
-: The endpoint to use for the remote OCR engine. This is required for Azure AI.
-    Defaults to None.


@@ -25,10 +25,9 @@ physical documents into a searchable online archive so you can keep, well, _less
 ## Features
 - **Organize and index** your scanned documents with tags, correspondents, types, and more.
-- _Your_ data is stored locally on _your_ server and is never transmitted or shared in any way, unless you explicitly choose to do so.
+- _Your_ data is stored locally on _your_ server and is never transmitted or shared in any way.
 - Performs **OCR** on your documents, adding searchable and selectable text, even to documents scanned with only images.
 - Utilizes the open-source Tesseract engine to recognize more than 100 languages.
-- _New!_ Supports remote OCR with Azure AI (opt-in).
 - Documents are saved as PDF/A format which is designed for long term storage, alongside the unaltered originals.
 - Uses machine-learning to automatically add tags, correspondents and document types to your documents.
 - Supports PDF documents, images, plain text files, Office documents (Word, Excel, PowerPoint, and LibreOffice equivalents)[^1] and more.


@@ -408,7 +408,7 @@ Currently, there are three events that correspond to workflow trigger 'types':
 but the document content has been extracted and metadata such as document type, tags, etc. have been set, so these can now
 be used for filtering.
 3. **Document Updated**: when a document is updated. Similar to 'added' events, triggers can include filtering by content matching,
-tags, doc type, correspondent or storage path.
+tags, doc type, or correspondent.
 4. **Scheduled**: a scheduled trigger that can be used to run workflows at a specific time. The date used can be either the document
 added, created, updated date or you can specify a (date) custom field. You can also specify a day offset from the date (positive
 offsets will trigger after the date, negative offsets will trigger before).
@@ -452,11 +452,10 @@ Workflows allow you to filter by:
 - File path, including wildcards. Note that enabling `PAPERLESS_CONSUMER_RECURSIVE` would allow, for
 example, automatically assigning documents to different owners based on the upload directory.
 - Mail rule. Choosing this option will force 'mail fetch' to be the workflow source.
-- Content matching (`Added`, `Updated` and `Scheduled` triggers only). Filter document content using the matching settings.
-- Tags (`Added`, `Updated` and `Scheduled` triggers only). Filter for documents with any of the specified tags
-- Document type (`Added`, `Updated` and `Scheduled` triggers only). Filter documents with this doc type
-- Correspondent (`Added`, `Updated` and `Scheduled` triggers only). Filter documents with this correspondent
-- Storage path (`Added`, `Updated` and `Scheduled` triggers only). Filter documents with this storage path
+- Content matching (`Added` and `Updated` triggers only). Filter document content using the matching settings.
+- Tags (`Added` and `Updated` triggers only). Filter for documents with any of the specified tags
+- Document type (`Added` and `Updated` triggers only). Filter documents with this doc type
+- Correspondent (`Added` and `Updated` triggers only). Filter documents with this correspondent
 ### Workflow Actions
@@ -506,52 +505,35 @@ you may want to adjust these settings to prevent abuse.
 #### Workflow placeholders
-Titles can be assigned by workflows using [Jinja templates](https://jinja.palletsprojects.com/en/3.1.x/templates/).
-This allows for complex logic to be used to generate the title, including [logical structures](https://jinja.palletsprojects.com/en/3.1.x/templates/#list-of-control-structures)
-and [filters](https://jinja.palletsprojects.com/en/3.1.x/templates/#id11).
-The template is provided as a string.
-Using Jinja2 Templates is also useful for [Date localization](advanced_usage.md#Date-Localization) in the title.
-The available inputs differ depending on the type of workflow trigger.
-This is because at the time of consumption (when the text is to be set), no automatic tags etc. have been
-applied. You can use the following placeholders in the template with any trigger type:
-- `{{correspondent}}`: assigned correspondent name
-- `{{document_type}}`: assigned document type name
-- `{{owner_username}}`: assigned owner username
-- `{{added}}`: added datetime
-- `{{added_year}}`: added year
-- `{{added_year_short}}`: added year
-- `{{added_month}}`: added month
-- `{{added_month_name}}`: added month name
-- `{{added_month_name_short}}`: added month short name
-- `{{added_day}}`: added day
-- `{{added_time}}`: added time in HH:MM format
-- `{{original_filename}}`: original file name without extension
-- `{(unknown)}`: current file name without extension
+Some workflow text can include placeholders but the available options differ depending on the type of
+workflow trigger. This is because at the time of consumption (when the text is to be set), no automatic tags etc. have been
+applied. You can use the following placeholders with any trigger type:
+- `{correspondent}`: assigned correspondent name
+- `{document_type}`: assigned document type name
+- `{owner_username}`: assigned owner username
+- `{added}`: added datetime
+- `{added_year}`: added year
+- `{added_year_short}`: added year
+- `{added_month}`: added month
+- `{added_month_name}`: added month name
+- `{added_month_name_short}`: added month short name
+- `{added_day}`: added day
+- `{added_time}`: added time in HH:MM format
+- `{original_filename}`: original file name without extension
+- `(unknown)`: current file name without extension
 The following placeholders are only available for "added" or "updated" triggers
-- `{{created}}`: created datetime
-- `{{created_year}}`: created year
-- `{{created_year_short}}`: created year
-- `{{created_month}}`: created month
-- `{{created_month_name}}`: created month name
-- `{{created_month_name_short}}`: created month short name
-- `{{created_day}}`: created day
-- `{{created_time}}`: created time in HH:MM format
-- `{{doc_url}}`: URL to the document in the web UI. Requires the `PAPERLESS_URL` setting to be set.
+- `{created}`: created datetime
+- `{created_year}`: created year
+- `{created_year_short}`: created year
+- `{created_month}`: created month
+- `{created_month_name}`: created month name
+- `{created_month_name_short}`: created month short name
+- `{created_day}`: created day
+- `{created_time}`: created time in HH:MM format
+- `{doc_url}`: URL to the document in the web UI. Requires the `PAPERLESS_URL` setting to be set.
-##### Examples
-```jinja2
-{{ created | localize_date('MMMM', 'en_US') }}
-<!-- Output: "January" -->
-{{ added | localize_date('MMMM', 'de_DE') }}
-<!-- Output: "Juni" --> # codespell:ignore
-```
 ### Workflow permissions
@@ -868,21 +850,6 @@ how regularly you intend to scan documents and use paperless.
 performed the task associated with the document, move it to the
 inbox.
-## Remote OCR
-!!! important
-    This feature is disabled by default and will always remain strictly "opt-in".
-Paperless-ngx supports performing OCR on documents using remote services. At the moment, this is limited to
-[Microsoft's Azure "Document Intelligence" service](https://azure.microsoft.com/en-us/products/ai-services/ai-document-intelligence).
-This is of course a paid service (with a free tier) which requires an Azure account and subscription. Azure AI is not affiliated with
-Paperless-ngx in any way. When enabled, Paperless-ngx will automatically send appropriate documents to Azure for OCR processing, bypassing
-the local OCR engine. See the [configuration](configuration.md#PAPERLESS_REMOTE_OCR_ENGINE) options for more details.
-Additionally, when using a commercial service with this feature, consider both potential costs as well as any associated file size
-or page limitations (e.g. with a free tier).
 ## Architecture
 Paperless-ngx consists of the following components:


@@ -15,7 +15,6 @@ classifiers = [
 # This will allow testing to not install a webserver, mysql, etc
 dependencies = [
-    "azure-ai-documentintelligence>=1.0.2",
     "babel>=2.17",
     "bleach~=6.2.0",
     "celery[redis]~=5.5.1",
@@ -233,7 +232,6 @@ testpaths = [
     "src/paperless_tesseract/tests/",
     "src/paperless_tika/tests",
     "src/paperless_text/tests/",
-    "src/paperless_remote/tests/",
 ]
 addopts = [
     "--pythonwarnings=all",
@@ -243,7 +241,7 @@ addopts = [
     "--numprocesses=auto",
     "--maxprocesses=16",
     "--quiet",
-    "--durations=50",
+    "--durations=0",
     "--junitxml=junit.xml",
     "-o junit_family=legacy",
 ]
@@ -251,6 +249,10 @@ norecursedirs = [ "src/locale/", ".venv/", "src-ui/" ]
 DJANGO_SETTINGS_MODULE = "paperless.settings"
+markers = [
+    "use_whitenoise: mark test to run with Whitenoise middleware enabled",
+]
 [tool.pytest_env]
 PAPERLESS_DISABLE_DBHANDLER = "true"
 PAPERLESS_CACHE_BACKEND = "django.core.cache.backends.locmem.LocMemCache"
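The `markers` entry added above registers a custom pytest marker. A test might opt in to it along these lines; this is a sketch only, and the conftest fixture that actually swaps the Whitenoise middleware in response to the marker is assumed, not shown in this diff:

```python
import pytest


@pytest.mark.use_whitenoise
def test_static_asset_served():
    # In the real suite, a fixture would detect this marker and enable the
    # Whitenoise middleware before issuing a request via Django's test client.
    assert True
```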

File diff suppressed because it is too large


@@ -35,9 +35,6 @@
 @case (CustomFieldDataType.Select) {
 <span [ngbTooltip]="nameTooltip">{{getSelectValue(field, value)}}</span>
 }
-@case (CustomFieldDataType.LongText) {
-<p class="mb-0" [ngbTooltip]="nameTooltip">{{value | slice:0:20}}{{value.length > 20 ? '...' : ''}}</p>
-}
 @default {
 <span [ngbTooltip]="nameTooltip">{{value}}</span>
 }


@@ -1,5 +1,5 @@
-import { CurrencyPipe, getLocaleCurrencyCode, SlicePipe } from '@angular/common'
-import { Component, inject, Input, LOCALE_ID, OnInit } from '@angular/core'
+import { CurrencyPipe, getLocaleCurrencyCode } from '@angular/common'
+import { Component, Input, LOCALE_ID, OnInit, inject } from '@angular/core'
 import { NgbTooltipModule } from '@ng-bootstrap/ng-bootstrap'
 import { takeUntil } from 'rxjs'
 import { CustomField, CustomFieldDataType } from 'src/app/data/custom-field'
@@ -14,7 +14,7 @@ import { LoadingComponentWithPermissions } from '../../loading-component/loading
 selector: 'pngx-custom-field-display',
 templateUrl: './custom-field-display.component.html',
 styleUrl: './custom-field-display.component.scss',
-imports: [CustomDatePipe, CurrencyPipe, NgbTooltipModule, SlicePipe],
+imports: [CustomDatePipe, CurrencyPipe, NgbTooltipModule],
 })
 export class CustomFieldDisplayComponent
 extends LoadingComponentWithPermissions


@@ -177,7 +177,6 @@
 <pngx-input-tags [allowCreate]="false" i18n-title title="Has any of tags" formControlName="filter_has_tags"></pngx-input-tags>
 <pngx-input-select i18n-title title="Has correspondent" [items]="correspondents" [allowNull]="true" formControlName="filter_has_correspondent"></pngx-input-select>
 <pngx-input-select i18n-title title="Has document type" [items]="documentTypes" [allowNull]="true" formControlName="filter_has_document_type"></pngx-input-select>
-<pngx-input-select i18n-title title="Has storage path" [items]="storagePaths" [allowNull]="true" formControlName="filter_has_storage_path"></pngx-input-select>
 </div>
 }
 </div>


@@ -412,9 +412,6 @@ export class WorkflowEditDialogComponent
 filter_has_document_type: new FormControl(
 trigger.filter_has_document_type
 ),
-filter_has_storage_path: new FormControl(
-trigger.filter_has_storage_path
-),
 schedule_offset_days: new FormControl(trigger.schedule_offset_days),
 schedule_is_recurring: new FormControl(trigger.schedule_is_recurring),
 schedule_recurring_interval_days: new FormControl(
@@ -539,7 +536,6 @@ export class WorkflowEditDialogComponent
 filter_has_tags: [],
 filter_has_correspondent: null,
 filter_has_document_type: null,
-filter_has_storage_path: null,
 matching_algorithm: MATCH_NONE,
 match: '',
 is_insensitive: true,


@@ -68,11 +68,6 @@
 [allowNull]="true"
 [horizontal]="true"></pngx-input-select>
 }
-@case (CustomFieldDataType.LongText) {
-<pngx-input-textarea [(ngModel)]="value[fieldId]" (ngModelChange)="onChange(value)"
-[title]="getCustomField(fieldId)?.name"
-class="flex-grow-1"></pngx-input-textarea>
-}
 }
 <button type="button" class="btn btn-link text-danger" (click)="removeSelectedField.next(fieldId)">
 <i-bs name="trash"></i-bs>


@@ -24,7 +24,6 @@ import { MonetaryComponent } from '../monetary/monetary.component'
 import { NumberComponent } from '../number/number.component'
 import { SelectComponent } from '../select/select.component'
 import { TextComponent } from '../text/text.component'
-import { TextAreaComponent } from '../textarea/textarea.component'
 import { UrlComponent } from '../url/url.component'
 @Component({
@@ -52,7 +51,6 @@ import { UrlComponent } from '../url/url.component'
 ReactiveFormsModule,
 RouterModule,
 NgxBootstrapIconsModule,
-TextAreaComponent,
 ],
 })
 export class CustomFieldsValuesComponent extends AbstractInputComponent<Object> {


@@ -4,7 +4,6 @@ import {
 NG_VALUE_ACCESSOR,
 ReactiveFormsModule,
 } from '@angular/forms'
-import { NgxBootstrapIconsModule } from 'ngx-bootstrap-icons'
 import { SafeHtmlPipe } from 'src/app/pipes/safehtml.pipe'
 import { AbstractInputComponent } from '../abstract-input'
@@ -19,12 +18,7 @@ import { AbstractInputComponent } from '../abstract-input'
 selector: 'pngx-input-textarea',
 templateUrl: './textarea.component.html',
 styleUrls: ['./textarea.component.scss'],
-imports: [
-FormsModule,
-ReactiveFormsModule,
-SafeHtmlPipe,
-NgxBootstrapIconsModule,
-],
+imports: [FormsModule, ReactiveFormsModule, SafeHtmlPipe],
 })
 export class TextAreaComponent extends AbstractInputComponent<string> {
 @Input()


@@ -30,7 +30,7 @@
 <div class="page-item rounded p-2" cdkDrag (click)="toggleSelection(i)" [class.selected]="p.selected">
 <div class="btn-toolbar hover-actions z-10">
 <div class="btn-group me-2">
-<button class="btn btn-sm btn-dark" (click)="rotate(i, true); $event.stopPropagation()" title="Rotate page counter-clockwise" i18n-title>
+<button class="btn btn-sm btn-dark" (click)="rotate(i); $event.stopPropagation()" title="Rotate page counter-clockwise" i18n-title>
 <i-bs name="arrow-counterclockwise"></i-bs>
 </button>
 <button class="btn btn-sm btn-dark" (click)="rotate(i); $event.stopPropagation()" title="Rotate page clockwise" i18n-title>


@@ -67,9 +67,8 @@ export class PDFEditorComponent extends ConfirmDialogComponent {
 this.pages[i].selected = !this.pages[i].selected
 }
-rotate(i: number, counterclockwise: boolean = false) {
-this.pages[i].rotate =
-(this.pages[i].rotate + (counterclockwise ? -90 : 90) + 360) % 360
+rotate(i: number) {
+this.pages[i].rotate = (this.pages[i].rotate + 90) % 360
 }
 rotateSelected(dir: number) {
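The arithmetic removed on the left-hand side normalizes negative angles before the modulo. A standalone sketch of both variants (function names assumed here for illustration) makes the difference visible:

```typescript
// Left (old) variant: supports counter-clockwise rotation, and adds 360
// before the modulo so the result stays in [0, 360) even for -90 steps.
function rotateOld(angle: number, counterclockwise = false): number {
  return (angle + (counterclockwise ? -90 : 90) + 360) % 360
}

// Right (new) variant: clockwise only, so no normalization is needed.
function rotateNew(angle: number): number {
  return (angle + 90) % 360
}
```

Starting from 0, `rotateOld(0, true)` yields 270 rather than -90, which is what the extra `+ 360` guarantees.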


@@ -17,7 +17,7 @@
 <i-bs width="0.9em" height="0.9em" name="exclamation-triangle"></i-bs>
 }
 <div>
-<p class="ms-2 mb-0 text-break">{{toast.content}}</p>
+<p class="ms-2 mb-0">{{toast.content}}</p>
 @if (toast.error) {
 <details class="ms-2">
 <div class="mt-2 ms-n4 me-n2 small">


@@ -54,10 +54,6 @@
 <i-bs width="1em" height="1em" name="arrow-counterclockwise"></i-bs>&nbsp;<span i18n>Reprocess</span>
 </button>
-<button ngbDropdownItem (click)="printDocument()" [hidden]="useNativePdfViewer || isMobile">
-<i-bs width="1em" height="1em" name="printer"></i-bs>&nbsp;<span i18n>Print</span>
-</button>
 <button ngbDropdownItem (click)="moreLike()">
 <i-bs width="1em" height="1em" name="diagram-3"></i-bs>&nbsp;<span i18n>More like this</span>
 </button>
@@ -216,14 +212,6 @@
 (removed)="removeField(fieldInstance)"
 [error]="getCustomFieldError(i)"></pngx-input-select>
 }
-@case (CustomFieldDataType.LongText) {
-<pngx-input-textarea formControlName="value"
-[title]="getCustomFieldFromInstance(fieldInstance)?.name"
-[removable]="userCanEdit"
-(removed)="removeField(fieldInstance)"
-[horizontal]="true"
-[error]="getCustomFieldError(i)"></pngx-input-textarea>
-}
 }
 </div>
 }


@@ -1415,151 +1415,4 @@ describe('DocumentDetailComponent', () => {
 .flush('fail', { status: 500, statusText: 'Server Error' })
 expect(component.previewText).toContain('An error occurred loading content')
 })
-it('should print document successfully', fakeAsync(() => {
-initNormally()
-const appendChildSpy = jest
-.spyOn(document.body, 'appendChild')
-.mockImplementation((node: Node) => node)
-const removeChildSpy = jest
-.spyOn(document.body, 'removeChild')
-.mockImplementation((node: Node) => node)
-const createObjectURLSpy = jest
-.spyOn(URL, 'createObjectURL')
-.mockReturnValue('blob:mock-url')
-const revokeObjectURLSpy = jest
-.spyOn(URL, 'revokeObjectURL')
-.mockImplementation(() => {})
-const mockContentWindow = {
-focus: jest.fn(),
-print: jest.fn(),
-onafterprint: null,
-}
-const mockIframe = {
-style: {},
-src: '',
-onload: null,
-contentWindow: mockContentWindow,
-}
-const createElementSpy = jest
-.spyOn(document, 'createElement')
-.mockReturnValue(mockIframe as any)
-const blob = new Blob(['test'], { type: 'application/pdf' })
-component.printDocument()
-const req = httpTestingController.expectOne(
-`${environment.apiBaseUrl}documents/${doc.id}/download/`
-)
-req.flush(blob)
-tick()
-expect(createElementSpy).toHaveBeenCalledWith('iframe')
-expect(appendChildSpy).toHaveBeenCalledWith(mockIframe)
-expect(createObjectURLSpy).toHaveBeenCalledWith(blob)
-if (mockIframe.onload) {
-mockIframe.onload({} as any)
-}
-expect(mockContentWindow.focus).toHaveBeenCalled()
-expect(mockContentWindow.print).toHaveBeenCalled()
-if (mockIframe.onload) {
-mockIframe.onload(new Event('load'))
-}
-if (mockContentWindow.onafterprint) {
-mockContentWindow.onafterprint(new Event('afterprint'))
-}
-expect(removeChildSpy).toHaveBeenCalledWith(mockIframe)
-expect(revokeObjectURLSpy).toHaveBeenCalledWith('blob:mock-url')
-createElementSpy.mockRestore()
-appendChildSpy.mockRestore()
-removeChildSpy.mockRestore()
-createObjectURLSpy.mockRestore()
-revokeObjectURLSpy.mockRestore()
-}))
-it('should show error toast if print document fails', () => {
-initNormally()
-const toastSpy = jest.spyOn(toastService, 'showError')
-component.printDocument()
-const req = httpTestingController.expectOne(
-`${environment.apiBaseUrl}documents/${doc.id}/download/`
-)
-req.error(new ErrorEvent('failed'))
-expect(toastSpy).toHaveBeenCalledWith(
-'Error loading document for printing.'
-)
-})
-it('should show error toast if printing throws inside iframe', fakeAsync(() => {
-initNormally()
-const appendChildSpy = jest
-.spyOn(document.body, 'appendChild')
-.mockImplementation((node: Node) => node)
-const removeChildSpy = jest
-.spyOn(document.body, 'removeChild')
-.mockImplementation((node: Node) => node)
-const createObjectURLSpy = jest
-.spyOn(URL, 'createObjectURL')
-.mockReturnValue('blob:mock-url')
-const revokeObjectURLSpy = jest
-.spyOn(URL, 'revokeObjectURL')
-.mockImplementation(() => {})
-const toastSpy = jest.spyOn(toastService, 'showError')
-const mockContentWindow = {
-focus: jest.fn().mockImplementation(() => {
-throw new Error('focus failed')
-}),
-print: jest.fn(),
-onafterprint: null,
-}
-const mockIframe: any = {
-style: {},
-src: '',
-onload: null,
-contentWindow: mockContentWindow,
-}
-const createElementSpy = jest
-.spyOn(document, 'createElement')
-.mockReturnValue(mockIframe as any)
-const blob = new Blob(['test'], { type: 'application/pdf' })
-component.printDocument()
-const req = httpTestingController.expectOne(
-`${environment.apiBaseUrl}documents/${doc.id}/download/`
-)
-req.flush(blob)
-tick()
-if (mockIframe.onload) {
-mockIframe.onload(new Event('load'))
-}
-expect(toastSpy).toHaveBeenCalled()
-expect(removeChildSpy).toHaveBeenCalledWith(mockIframe)
-expect(revokeObjectURLSpy).toHaveBeenCalledWith('blob:mock-url')
-createElementSpy.mockRestore()
-appendChildSpy.mockRestore()
-removeChildSpy.mockRestore()
-createObjectURLSpy.mockRestore()
-revokeObjectURLSpy.mockRestore()
-}))
 })


@@ -98,7 +98,6 @@ import { PermissionsFormComponent } from '../common/input/permissions/permission
 import { SelectComponent } from '../common/input/select/select.component'
 import { TagsComponent } from '../common/input/tags/tags.component'
 import { TextComponent } from '../common/input/text/text.component'
-import { TextAreaComponent } from '../common/input/textarea/textarea.component'
 import { UrlComponent } from '../common/input/url/url.component'
 import { PageHeaderComponent } from '../common/page-header/page-header.component'
 import {
@@ -174,7 +173,6 @@ export enum ZoomSetting {
 NgbDropdownModule,
 NgxBootstrapIconsModule,
 PdfViewerModule,
-TextAreaComponent,
 ],
 })
 export class DocumentDetailComponent
@@ -293,10 +291,6 @@ export class DocumentDetailComponent
 return this.settings.get(SETTINGS_KEYS.USE_NATIVE_PDF_VIEWER)
 }
-get isMobile(): boolean {
-return this.deviceDetectorService.isMobile()
-}
 get archiveContentRenderType(): ContentRenderType {
 return this.document?.archived_file_name
 ? this.getRenderType('application/pdf')
@@ -1425,44 +1419,6 @@ export class DocumentDetailComponent
 })
 }
-printDocument() {
-const printUrl = this.documentsService.getDownloadUrl(
-this.document.id,
-false
-)
-this.http
-.get(printUrl, { responseType: 'blob' })
-.pipe(takeUntil(this.unsubscribeNotifier))
-.subscribe({
-next: (blob) => {
-const blobUrl = URL.createObjectURL(blob)
-const iframe = document.createElement('iframe')
-iframe.style.display = 'none'
-iframe.src = blobUrl
-document.body.appendChild(iframe)
-iframe.onload = () => {
-try {
-iframe.contentWindow.focus()
-iframe.contentWindow.print()
-iframe.contentWindow.onafterprint = () => {
-document.body.removeChild(iframe)
-URL.revokeObjectURL(blobUrl)
-}
-} catch (err) {
-this.toastService.showError($localize`Print failed.`, err)
-document.body.removeChild(iframe)
-URL.revokeObjectURL(blobUrl)
-}
-}
-},
-error: () => {
-this.toastService.showError(
-$localize`Error loading document for printing.`
-)
-},
-})
-}
 public openShareLinks() {
 const modal = this.modalService.open(ShareLinksDialogComponent)
 modal.componentInstance.documentId = this.document.id


@@ -56,10 +56,6 @@
 [items]="field.extra_data.select_options" bindLabel="label" [allowNull]="true" [horizontal]="true">
 </pngx-input-select>
 }
-@case (CustomFieldDataType.LongText) {
-<pngx-input-textarea formControlName="{{field.id}}" class="w-100" [title]="field.name" [horizontal]="true">
-</pngx-input-textarea>
-}
 }
 <button type="button" class="btn btn-outline-danger mb-3" (click)="removeField(field.id)">
 <i-bs name="x"></i-bs>


@@ -18,7 +18,6 @@ import { TextComponent } from 'src/app/components/common/input/text/text.compone
import { UrlComponent } from 'src/app/components/common/input/url/url.component'
import { CustomField, CustomFieldDataType } from 'src/app/data/custom-field'
import { DocumentService } from 'src/app/services/rest/document.service'
import { TextAreaComponent } from '../../../common/input/textarea/textarea.component'
@Component({
selector: 'pngx-custom-fields-bulk-edit-dialog',
@@ -36,7 +35,6 @@ import { TextAreaComponent } from '../../../common/input/textarea/textarea.compo
FormsModule,
ReactiveFormsModule,
NgxBootstrapIconsModule,
TextAreaComponent,
],
})
export class CustomFieldsBulkEditDialogComponent {


@@ -114,10 +114,6 @@ export const CUSTOM_FIELD_QUERY_OPERATOR_GROUPS_BY_TYPE = {
CustomFieldQueryOperatorGroups.Exact,
CustomFieldQueryOperatorGroups.Subset,
],
[CustomFieldDataType.LongText]: [
CustomFieldQueryOperatorGroups.Basic,
CustomFieldQueryOperatorGroups.String,
],
}
export const CUSTOM_FIELD_QUERY_VALUE_TYPES_BY_OPERATOR = {


@@ -10,7 +10,6 @@ export enum CustomFieldDataType {
Monetary = 'monetary',
DocumentLink = 'documentlink',
Select = 'select',
LongText = 'longtext',
}
export const DATA_TYPE_LABELS = [
@@ -50,10 +49,6 @@ export const DATA_TYPE_LABELS = [
id: CustomFieldDataType.Select,
name: $localize`Select`,
},
{
id: CustomFieldDataType.LongText,
name: $localize`Long Text`,
},
]
export interface CustomField extends ObjectWithId {


@@ -44,8 +44,6 @@ export interface WorkflowTrigger extends ObjectWithId {
filter_has_document_type?: number // DocumentType.id
filter_has_storage_path?: number // StoragePath.id
schedule_offset_days?: number
schedule_is_recurring?: boolean


@@ -110,7 +110,6 @@ import {
playFill,
plus,
plusCircle,
printer,
questionCircle,
scissors,
search,
@@ -320,7 +319,6 @@ const icons = {
playFill,
plus,
plusCircle,
printer,
questionCircle,
scissors,
search,


@@ -181,7 +181,6 @@ def modify_custom_fields(
defaults[value_field] = value
if (
custom_field.data_type == CustomField.FieldDataType.DOCUMENTLINK
and value
and doc_id in value
):
# Prevent self-linking


@@ -230,7 +230,6 @@ class CustomFieldsFilter(Filter):
| qs.filter(custom_fields__value_monetary__icontains=value)
| qs.filter(custom_fields__value_document_ids__icontains=value)
| qs.filter(custom_fields__value_select__in=option_ids)
| qs.filter(custom_fields__value_long_text__icontains=value)
)
else:
return qs
@@ -315,7 +314,6 @@ class CustomFieldQueryParser:
CustomField.FieldDataType.MONETARY: ("basic", "string", "arithmetic"),
CustomField.FieldDataType.DOCUMENTLINK: ("basic", "containment"),
CustomField.FieldDataType.SELECT: ("basic",),
CustomField.FieldDataType.LONG_TEXT: ("basic", "string"),
}
DATE_COMPONENTS = [
@@ -847,10 +845,7 @@ class DocumentsOrderingFilter(OrderingFilter):
annotation = None
match field.data_type:
case (
CustomField.FieldDataType.STRING
| CustomField.FieldDataType.LONG_TEXT
):
case CustomField.FieldDataType.STRING:
annotation = Subquery(
CustomFieldInstance.objects.filter(
document_id=OuterRef("id"),


@@ -386,16 +386,6 @@ def existing_document_matches_workflow(
)
trigger_matched = False
# Document storage_path vs trigger has_storage_path
if (
trigger.filter_has_storage_path is not None
and document.storage_path != trigger.filter_has_storage_path
):
reason = (
f"Document storage path {document.storage_path} does not match {trigger.filter_has_storage_path}",
)
trigger_matched = False
# Document original_filename vs trigger filename
if (
trigger.filter_filename is not None
@@ -440,11 +430,6 @@ def prefilter_documents_by_workflowtrigger(
document_type=trigger.filter_has_document_type,
)
if trigger.filter_has_storage_path is not None:
documents = documents.filter(
storage_path=trigger.filter_has_storage_path,
)
if trigger.filter_filename is not None and len(trigger.filter_filename) > 0:
# the true fnmatch will actually run later so we just want a loose filter here
regex = fnmatch_translate(trigger.filter_filename).lstrip("^").rstrip("$")


@@ -1,35 +0,0 @@
# Generated by Django 5.2.6 on 2025-09-11 17:29
import django.db.models.deletion
from django.db import migrations
from django.db import models
class Migration(migrations.Migration):
dependencies = [
("documents", "1068_alter_document_created"),
]
operations = [
migrations.AddField(
model_name="workflowtrigger",
name="filter_has_storage_path",
field=models.ForeignKey(
blank=True,
null=True,
on_delete=django.db.models.deletion.SET_NULL,
to="documents.storagepath",
verbose_name="has this storage path",
),
),
migrations.AlterField(
model_name="workflowaction",
name="assign_title",
field=models.TextField(
blank=True,
help_text="Assign a document title, must be a Jinja2 template, see documentation.",
null=True,
verbose_name="assign title",
),
),
]


@@ -1,39 +0,0 @@
# Generated by Django 5.2.6 on 2025-09-13 17:11
from django.db import migrations
from django.db import models
class Migration(migrations.Migration):
dependencies = [
("documents", "1069_workflowtrigger_filter_has_storage_path_and_more"),
]
operations = [
migrations.AddField(
model_name="customfieldinstance",
name="value_long_text",
field=models.TextField(null=True),
),
migrations.AlterField(
model_name="customfield",
name="data_type",
field=models.CharField(
choices=[
("string", "String"),
("url", "URL"),
("date", "Date"),
("boolean", "Boolean"),
("integer", "Integer"),
("float", "Float"),
("monetary", "Monetary"),
("documentlink", "Document Link"),
("select", "Select"),
("longtext", "Long Text"),
],
editable=False,
max_length=50,
verbose_name="data type",
),
),
]


@@ -759,7 +759,6 @@ class CustomField(models.Model):
MONETARY = ("monetary", _("Monetary"))
DOCUMENTLINK = ("documentlink", _("Document Link"))
SELECT = ("select", _("Select"))
LONG_TEXT = ("longtext", _("Long Text"))
created = models.DateTimeField(
_("created"),
@@ -817,7 +816,6 @@ class CustomFieldInstance(SoftDeleteModel):
CustomField.FieldDataType.MONETARY: "value_monetary",
CustomField.FieldDataType.DOCUMENTLINK: "value_document_ids",
CustomField.FieldDataType.SELECT: "value_select",
CustomField.FieldDataType.LONG_TEXT: "value_long_text",
}
created = models.DateTimeField(
@@ -885,8 +883,6 @@ class CustomFieldInstance(SoftDeleteModel):
value_select = models.CharField(null=True, max_length=16)
value_long_text = models.TextField(null=True)
class Meta:
ordering = ("created",)
verbose_name = _("custom field instance")
@@ -1048,14 +1044,6 @@ class WorkflowTrigger(models.Model):
verbose_name=_("has this correspondent"),
)
filter_has_storage_path = models.ForeignKey(
StoragePath,
null=True,
blank=True,
on_delete=models.SET_NULL,
verbose_name=_("has this storage path"),
)
schedule_offset_days = models.IntegerField(
_("schedule offset days"),
default=0,
@@ -1219,12 +1207,14 @@ class WorkflowAction(models.Model):
default=WorkflowActionType.ASSIGNMENT,
)
assign_title = models.TextField(
assign_title = models.CharField(
_("assign title"),
max_length=256,
null=True,
blank=True,
help_text=_(
"Assign a document title, must be a Jinja2 template, see documentation.",
"Assign a document title, can include some placeholders, "
"see documentation.",
),
)


@@ -2054,7 +2054,6 @@ class WorkflowTriggerSerializer(serializers.ModelSerializer):
"filter_has_tags",
"filter_has_correspondent",
"filter_has_document_type",
"filter_has_storage_path",
"schedule_offset_days",
"schedule_is_recurring",
"schedule_recurring_interval_days",


@@ -1,27 +0,0 @@
from jinja2.sandbox import SandboxedEnvironment
class JinjaEnvironment(SandboxedEnvironment):
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
self.undefined_tracker = None
def is_safe_callable(self, obj):
# Block access to .save() and .delete() methods
if callable(obj) and getattr(obj, "__name__", None) in (
"save",
"delete",
"update",
):
return False
# Call the parent method for other cases
return super().is_safe_callable(obj)
_template_environment = JinjaEnvironment(
trim_blocks=True,
lstrip_blocks=True,
keep_trailing_newline=False,
autoescape=False,
extensions=["jinja2.ext.loopcontrols"],
)
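The deleted `JinjaEnvironment` above blocks templates from calling mutating ORM methods by name. A minimal, self-contained sketch of the same sandboxing idea (the `Doc` class here is hypothetical; only jinja2 is assumed):

```python
from jinja2.sandbox import SandboxedEnvironment, SecurityError

class BlockingEnvironment(SandboxedEnvironment):
    def is_safe_callable(self, obj):
        # Refuse to call anything named save/delete/update, as above.
        if callable(obj) and getattr(obj, "__name__", None) in ("save", "delete", "update"):
            return False
        return super().is_safe_callable(obj)

class Doc:
    def save(self):
        return "saved"

    def title(self):
        return "hello"

env = BlockingEnvironment()
# An ordinary method call renders normally...
result = env.from_string("{{ doc.title() }}").render(doc=Doc())
print(result)
# ...while save() is rejected inside the sandbox with a SecurityError.
try:
    env.from_string("{{ doc.save() }}").render(doc=Doc())
    blocked = False
except SecurityError:
    blocked = True
print("blocked" if blocked else "not blocked")
```

Note that Django models additionally carry `alters_data = True` on `save()`, which the stock sandbox already respects; the name check above extends that to plain objects.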


@@ -2,16 +2,22 @@ import logging
import os
import re
from collections.abc import Iterable
from datetime import date
from datetime import datetime
from pathlib import PurePath
import pathvalidate
from babel import Locale
from babel import dates
from django.utils import timezone
from django.utils.dateparse import parse_date
from django.utils.text import slugify as django_slugify
from jinja2 import StrictUndefined
from jinja2 import Template
from jinja2 import TemplateSyntaxError
from jinja2 import UndefinedError
from jinja2 import make_logging_undefined
from jinja2.sandbox import SandboxedEnvironment
from jinja2.sandbox import SecurityError
from documents.models import Correspondent
@@ -21,16 +27,39 @@ from documents.models import Document
from documents.models import DocumentType
from documents.models import StoragePath
from documents.models import Tag
from documents.templating.environment import _template_environment
from documents.templating.filters import format_datetime
from documents.templating.filters import get_cf_value
from documents.templating.filters import localize_date
logger = logging.getLogger("paperless.templating")
_LogStrictUndefined = make_logging_undefined(logger, StrictUndefined)
class FilePathEnvironment(SandboxedEnvironment):
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
self.undefined_tracker = None
def is_safe_callable(self, obj):
# Block access to .save() and .delete() methods
if callable(obj) and getattr(obj, "__name__", None) in (
"save",
"delete",
"update",
):
return False
# Call the parent method for other cases
return super().is_safe_callable(obj)
_template_environment = FilePathEnvironment(
trim_blocks=True,
lstrip_blocks=True,
keep_trailing_newline=False,
autoescape=False,
extensions=["jinja2.ext.loopcontrols"],
undefined=_LogStrictUndefined,
)
class FilePathTemplate(Template):
def render(self, *args, **kwargs) -> str:
def clean_filepath(value: str) -> str:
@@ -52,7 +81,54 @@ class FilePathTemplate(Template):
return clean_filepath(original_render)
_template_environment.undefined = _LogStrictUndefined
def get_cf_value(
custom_field_data: dict[str, dict[str, str]],
name: str,
default: str | None = None,
) -> str | None:
if name in custom_field_data and custom_field_data[name]["value"] is not None:
return custom_field_data[name]["value"]
elif default is not None:
return default
return None
def format_datetime(value: str | datetime, format: str) -> str:
if isinstance(value, str):
value = parse_date(value)
return value.strftime(format=format)
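For reference, the `format_datetime` helper above parses an ISO date string before formatting. An equivalent standalone sketch using only the standard library (`date.fromisoformat` stands in for Django's `parse_date`; the function name here is illustrative):

```python
from datetime import date

def format_dt(value, fmt):
    # Accept an ISO "YYYY-MM-DD" string or a date/datetime object,
    # then format it with strftime.
    if isinstance(value, str):
        value = date.fromisoformat(value)
    return value.strftime(fmt)

formatted = format_dt("2023-10-26", "%Y/%m/%d")
print(formatted)  # 2023/10/26
```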
def localize_date(value: date | datetime, format: str, locale: str) -> str:
"""
Format a date or datetime object into a localized string using Babel.
Args:
value (date | datetime): The date or datetime to format. If a datetime
is provided, it should be timezone-aware (e.g., UTC from a Django DB object).
format (str): The format to use. Can be one of Babel's preset formats
('short', 'medium', 'long', 'full') or a custom pattern string.
locale (str): The locale code (e.g., 'en_US', 'fr_FR') to use for
localization.
Returns:
str: The localized, formatted date string.
Raises:
TypeError: If `value` is not a date or datetime instance.
"""
try:
Locale.parse(locale)
except Exception as e:
raise ValueError(f"Invalid locale identifier: {locale}") from e
if isinstance(value, datetime):
return dates.format_datetime(value, format=format, locale=locale)
elif isinstance(value, date):
return dates.format_date(value, format=format, locale=locale)
else:
raise TypeError(f"Unsupported type {type(value)} for localize_date")
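The `localize_date` logic above can be exercised without the rest of the project. A sketch assuming only Babel is installed, using the same inputs as the tests later in this diff:

```python
from datetime import date, datetime
from babel import Locale, dates

def localize_date(value, format, locale):
    # Validate the locale first, then dispatch on the value's type,
    # mirroring the function above.
    try:
        Locale.parse(locale)
    except Exception as e:
        raise ValueError(f"Invalid locale identifier: {locale}") from e
    if isinstance(value, datetime):
        return dates.format_datetime(value, format=format, locale=locale)
    elif isinstance(value, date):
        return dates.format_date(value, format=format, locale=locale)
    raise TypeError(f"Unsupported type {type(value)} for localize_date")

weekday = localize_date(date(2023, 10, 26), "EEEE", "de_DE")
print(weekday)  # Donnerstag
```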
_template_environment.filters["get_cf_value"] = get_cf_value
@@ -202,7 +278,6 @@ def get_custom_fields_context(
CustomField.FieldDataType.MONETARY,
CustomField.FieldDataType.STRING,
CustomField.FieldDataType.URL,
CustomField.FieldDataType.LONG_TEXT,
}:
value = pathvalidate.sanitize_filename(
field_instance.value,


@@ -1,60 +0,0 @@
from datetime import date
from datetime import datetime
from babel import Locale
from babel import dates
from django.utils.dateparse import parse_date
from django.utils.dateparse import parse_datetime
def localize_date(value: date | datetime | str, format: str, locale: str) -> str:
"""
Format a date, datetime or str object into a localized string using Babel.
Args:
value (date | datetime | str): The date or datetime to format. If a datetime
is provided, it should be timezone-aware (e.g., UTC from a Django DB object).
If a str is provided, it is parsed as a datetime.
format (str): The format to use. Can be one of Babel's preset formats
('short', 'medium', 'long', 'full') or a custom pattern string.
locale (str): The locale code (e.g., 'en_US', 'fr_FR') to use for
localization.
Returns:
str: The localized, formatted date string.
Raises:
TypeError: If `value` is not a date, datetime or str instance.
"""
if isinstance(value, str):
value = parse_datetime(value)
try:
Locale.parse(locale)
except Exception as e:
raise ValueError(f"Invalid locale identifier: {locale}") from e
if isinstance(value, datetime):
return dates.format_datetime(value, format=format, locale=locale)
elif isinstance(value, date):
return dates.format_date(value, format=format, locale=locale)
else:
raise TypeError(f"Unsupported type {type(value)} for localize_date")
def format_datetime(value: str | datetime, format: str) -> str:
if isinstance(value, str):
value = parse_date(value)
return value.strftime(format=format)
def get_cf_value(
custom_field_data: dict[str, dict[str, str]],
name: str,
default: str | None = None,
) -> str | None:
if name in custom_field_data and custom_field_data[name]["value"] is not None:
return custom_field_data[name]["value"]
elif default is not None:
return default
return None
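The `get_cf_value` filter above is pure Python; a standalone sketch with hypothetical field data, showing its three return paths (value present, caller's default used, nothing found):

```python
def get_cf_value(custom_field_data, name, default=None):
    # Return the stored value, else the caller's default, else None.
    if name in custom_field_data and custom_field_data[name]["value"] is not None:
        return custom_field_data[name]["value"]
    elif default is not None:
        return default
    return None

data = {"invoice_number": {"value": "INV-42"}, "notes": {"value": None}}
print(get_cf_value(data, "invoice_number"))  # INV-42
print(get_cf_value(data, "notes", "n/a"))    # n/a
print(get_cf_value(data, "missing"))         # None
```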


@@ -1,33 +1,7 @@
import logging
from datetime import date
from datetime import datetime
from pathlib import Path
from django.utils.text import slugify as django_slugify
from jinja2 import StrictUndefined
from jinja2 import Template
from jinja2 import TemplateSyntaxError
from jinja2 import UndefinedError
from jinja2 import make_logging_undefined
from jinja2.sandbox import SecurityError
from documents.templating.environment import _template_environment
from documents.templating.filters import format_datetime
from documents.templating.filters import localize_date
logger = logging.getLogger("paperless.templating")
_LogStrictUndefined = make_logging_undefined(logger, StrictUndefined)
_template_environment.undefined = _LogStrictUndefined
_template_environment.filters["datetime"] = format_datetime
_template_environment.filters["slugify"] = django_slugify
_template_environment.filters["localize_date"] = localize_date
def parse_w_workflow_placeholders(
text: str,
@@ -46,7 +20,6 @@ def parse_w_workflow_placeholders(
e.g. for pre-consumption triggers created will not have been parsed yet, but it will
for added / updated triggers
"""
formatting = {
"correspondent": correspondent_name,
"document_type": doc_type_name,
@@ -79,28 +52,4 @@ def parse_w_workflow_placeholders(
formatting.update({"doc_title": doc_title})
if doc_url is not None:
formatting.update({"doc_url": doc_url})
return text.format(**formatting).strip()
logger.debug(f"Jinja Template is : {text}")
try:
template = _template_environment.from_string(
text,
template_class=Template,
)
rendered_template = template.render(formatting)
# We're good!
return rendered_template
except UndefinedError as e:
# The undefined class logs this already for us
raise e
except TemplateSyntaxError as e:
logger.warning(f"Template syntax error in title generation: {e}")
except SecurityError as e:
logger.warning(f"Template attempted restricted operation: {e}")
except Exception as e:
logger.warning(f"Unknown error in title generation: {e}")
logger.warning(
f"Invalid title format '{text}', workflow not applied: {e}",
)
raise e
return None


@@ -28,3 +28,22 @@ def authenticated_rest_api_client(rest_api_client: APIClient):
user = UserModel.objects.create_user(username="testuser", password="password")
rest_api_client.force_authenticate(user=user)
yield rest_api_client
@pytest.fixture(autouse=True)
def configure_whitenoise_middleware(request, settings):
"""
By default, remove Whitenoise middleware from tests.
Only include it when test is marked with @pytest.mark.use_whitenoise
"""
# Check if the test is marked to use whitenoise
use_whitenoise_marker = request.node.get_closest_marker("use_whitenoise")
if not use_whitenoise_marker:
# Filter out whitenoise middleware using pytest-django's settings fixture
middleware_without_whitenoise = [
mw for mw in settings.MIDDLEWARE if "whitenoisemiddleware" not in mw.lower()
]
settings.MIDDLEWARE = middleware_without_whitenoise
yield
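The fixture above drops Whitenoise via a case-insensitive substring match. The filtering step can be sketched in isolation (the middleware list here is a hypothetical example, not the project's actual settings):

```python
MIDDLEWARE = [
    "django.middleware.security.SecurityMiddleware",
    "whitenoise.middleware.WhiteNoiseMiddleware",
    "django.middleware.common.CommonMiddleware",
]

# Same expression as the fixture: keep every entry except Whitenoise,
# lower-casing so dotted-path casing does not matter.
filtered = [mw for mw in MIDDLEWARE if "whitenoisemiddleware" not in mw.lower()]
print(filtered)
```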


@@ -186,7 +186,6 @@ class TestApiWorkflows(DirectoriesMixin, APITestCase):
"filter_has_tags": [self.t1.id],
"filter_has_document_type": self.dt.id,
"filter_has_correspondent": self.c.id,
"filter_has_storage_path": self.sp.id,
},
],
"actions": [


@@ -304,6 +304,22 @@ class TestConsumer(
self.assertEqual(document.title, "Override Title")
self._assert_first_last_send_progress()
def testOverrideTitleInvalidPlaceholders(self):
with self.assertLogs("paperless.consumer", level="ERROR") as cm:
with self.get_consumer(
self.get_test_file(),
DocumentMetadataOverrides(title="Override {correspondent]"),
) as consumer:
consumer.run()
document = Document.objects.first()
self.assertIsNotNone(document)
self.assertEqual(document.title, "sample")
expected_str = "Error occurred parsing title override 'Override {correspondent]', falling back to original"
self.assertIn(expected_str, cm.output[0])
def testOverrideCorrespondent(self):
c = Correspondent.objects.create(name="test")
@@ -421,7 +437,7 @@ class TestConsumer(
DocumentMetadataOverrides(
correspondent_id=c.pk,
document_type_id=dt.pk,
title="{{correspondent}}{{document_type}} {{added_month}}-{{added_year_short}}",
title="{correspondent}{document_type} {added_month}-{added_year_short}",
),
) as consumer:
consumer.run()


@@ -18,17 +18,14 @@ class TestDocument(TestCase):
self.originals_dir = tempfile.mkdtemp()
self.thumb_dir = tempfile.mkdtemp()
self.overrides = override_settings(
override_settings(
ORIGINALS_DIR=self.originals_dir,
THUMBNAIL_DIR=self.thumb_dir,
)
).enable()
self.overrides.enable()
def tearDown(self) -> None:
shutil.rmtree(self.originals_dir)
shutil.rmtree(self.thumb_dir)
self.overrides.disable()
def test_file_deletion(self):
document = Document.objects.create(


@@ -23,6 +23,7 @@ from documents.models import Document
from documents.models import DocumentType
from documents.models import StoragePath
from documents.tasks import empty_trash
from documents.templating.filepath import localize_date
from documents.tests.factories import DocumentFactory
from documents.tests.utils import DirectoriesMixin
from documents.tests.utils import FileSystemAssertsMixin
@@ -1590,13 +1591,166 @@ class TestFilenameGeneration(DirectoriesMixin, TestCase):
)
class TestPathDateLocalization:
class TestDateLocalization:
"""
Groups all tests related to the `localize_date` function.
"""
TEST_DATE = datetime.date(2023, 10, 26)
TEST_DATETIME = datetime.datetime(
2023,
10,
26,
14,
30,
5,
tzinfo=datetime.timezone.utc,
)
@pytest.mark.parametrize(
"value, format_style, locale_str, expected_output",
[
pytest.param(
TEST_DATE,
"EEEE, MMM d, yyyy",
"en_US",
"Thursday, Oct 26, 2023",
id="date-en_US-custom",
),
pytest.param(
TEST_DATE,
"dd.MM.yyyy",
"de_DE",
"26.10.2023",
id="date-de_DE-custom",
),
# German weekday and month name translation
pytest.param(
TEST_DATE,
"EEEE",
"de_DE",
"Donnerstag",
id="weekday-de_DE",
),
pytest.param(
TEST_DATE,
"MMMM",
"de_DE",
"Oktober",
id="month-de_DE",
),
# French weekday and month name translation
pytest.param(
TEST_DATE,
"EEEE",
"fr_FR",
"jeudi",
id="weekday-fr_FR",
),
pytest.param(
TEST_DATE,
"MMMM",
"fr_FR",
"octobre",
id="month-fr_FR",
),
],
)
def test_localize_date_with_date_objects(
self,
value: datetime.date,
format_style: str,
locale_str: str,
expected_output: str,
):
"""
Tests `localize_date` with `date` objects across different locales and formats.
"""
assert localize_date(value, format_style, locale_str) == expected_output
@pytest.mark.parametrize(
"value, format_style, locale_str, expected_output",
[
pytest.param(
TEST_DATETIME,
"yyyy.MM.dd G 'at' HH:mm:ss zzz",
"en_US",
"2023.10.26 AD at 14:30:05 UTC",
id="datetime-en_US-custom",
),
pytest.param(
TEST_DATETIME,
"dd.MM.yyyy",
"fr_FR",
"26.10.2023",
id="date-fr_FR-custom",
),
# Spanish weekday and month translation
pytest.param(
TEST_DATETIME,
"EEEE",
"es_ES",
"jueves",
id="weekday-es_ES",
),
pytest.param(
TEST_DATETIME,
"MMMM",
"es_ES",
"octubre",
id="month-es_ES",
),
# Italian weekday and month translation
pytest.param(
TEST_DATETIME,
"EEEE",
"it_IT",
"giovedì",
id="weekday-it_IT",
),
pytest.param(
TEST_DATETIME,
"MMMM",
"it_IT",
"ottobre",
id="month-it_IT",
),
],
)
def test_localize_date_with_datetime_objects(
self,
value: datetime.datetime,
format_style: str,
locale_str: str,
expected_output: str,
):
# To handle the non-breaking space in French and other locales
result = localize_date(value, format_style, locale_str)
assert result.replace("\u202f", " ") == expected_output.replace("\u202f", " ")
@pytest.mark.parametrize(
"invalid_value",
[
"2023-10-26",
1698330605,
None,
[],
{},
],
)
def test_localize_date_raises_type_error_for_invalid_input(self, invalid_value):
with pytest.raises(TypeError) as excinfo:
localize_date(invalid_value, "medium", "en_US")
assert f"Unsupported type {type(invalid_value)}" in str(excinfo.value)
def test_localize_date_raises_error_for_invalid_locale(self):
with pytest.raises(ValueError) as excinfo:
localize_date(self.TEST_DATE, "medium", "invalid_locale_code")
assert "Invalid locale identifier" in str(excinfo.value)
@pytest.mark.django_db
@pytest.mark.parametrize(
"filename_format,expected_filename",


@@ -1,296 +0,0 @@
import datetime
from typing import Any
from typing import Literal
import pytest
from documents.templating.filters import localize_date
class TestDateLocalization:
"""
Groups all tests related to the `localize_date` function.
"""
TEST_DATE = datetime.date(2023, 10, 26)
TEST_DATETIME = datetime.datetime(
2023,
10,
26,
14,
30,
5,
tzinfo=datetime.timezone.utc,
)
TEST_DATETIME_STRING: str = "2023-10-26T14:30:05+00:00"
TEST_DATE_STRING: str = "2023-10-26"
@pytest.mark.parametrize(
"value, format_style, locale_str, expected_output",
[
pytest.param(
TEST_DATE,
"EEEE, MMM d, yyyy",
"en_US",
"Thursday, Oct 26, 2023",
id="date-en_US-custom",
),
pytest.param(
TEST_DATE,
"dd.MM.yyyy",
"de_DE",
"26.10.2023",
id="date-de_DE-custom",
),
# German weekday and month name translation
pytest.param(
TEST_DATE,
"EEEE",
"de_DE",
"Donnerstag",
id="weekday-de_DE",
),
pytest.param(
TEST_DATE,
"MMMM",
"de_DE",
"Oktober",
id="month-de_DE",
),
# French weekday and month name translation
pytest.param(
TEST_DATE,
"EEEE",
"fr_FR",
"jeudi",
id="weekday-fr_FR",
),
pytest.param(
TEST_DATE,
"MMMM",
"fr_FR",
"octobre",
id="month-fr_FR",
),
],
)
def test_localize_date_with_date_objects(
self,
value: datetime.date,
format_style: str,
locale_str: str,
expected_output: str,
):
"""
Tests `localize_date` with `date` objects across different locales and formats.
"""
assert localize_date(value, format_style, locale_str) == expected_output
@pytest.mark.parametrize(
"value, format_style, locale_str, expected_output",
[
pytest.param(
TEST_DATETIME,
"yyyy.MM.dd G 'at' HH:mm:ss zzz",
"en_US",
"2023.10.26 AD at 14:30:05 UTC",
id="datetime-en_US-custom",
),
pytest.param(
TEST_DATETIME,
"dd.MM.yyyy",
"fr_FR",
"26.10.2023",
id="date-fr_FR-custom",
),
# Spanish weekday and month translation
pytest.param(
TEST_DATETIME,
"EEEE",
"es_ES",
"jueves",
id="weekday-es_ES",
),
pytest.param(
TEST_DATETIME,
"MMMM",
"es_ES",
"octubre",
id="month-es_ES",
),
# Italian weekday and month translation
pytest.param(
TEST_DATETIME,
"EEEE",
"it_IT",
"giovedì",
id="weekday-it_IT",
),
pytest.param(
TEST_DATETIME,
"MMMM",
"it_IT",
"ottobre",
id="month-it_IT",
),
],
)
def test_localize_date_with_datetime_objects(
self,
value: datetime.datetime,
format_style: str,
locale_str: str,
expected_output: str,
):
# To handle the non-breaking space in French and other locales
result = localize_date(value, format_style, locale_str)
assert result.replace("\u202f", " ") == expected_output.replace("\u202f", " ")
@pytest.mark.parametrize(
"invalid_value",
[
1698330605,
None,
[],
{},
],
)
def test_localize_date_raises_type_error_for_invalid_input(
self,
invalid_value: None | list[object] | dict[Any, Any] | Literal[1698330605],
):
with pytest.raises(TypeError) as excinfo:
localize_date(invalid_value, "medium", "en_US")
assert f"Unsupported type {type(invalid_value)}" in str(excinfo.value)
def test_localize_date_raises_error_for_invalid_locale(self):
with pytest.raises(ValueError) as excinfo:
localize_date(self.TEST_DATE, "medium", "invalid_locale_code")
assert "Invalid locale identifier" in str(excinfo.value)
@pytest.mark.parametrize(
"value, format_style, locale_str, expected_output",
[
pytest.param(
TEST_DATETIME_STRING,
"EEEE, MMM d, yyyy",
"en_US",
"Thursday, Oct 26, 2023",
id="date-en_US-custom",
),
pytest.param(
TEST_DATETIME_STRING,
"dd.MM.yyyy",
"de_DE",
"26.10.2023",
id="date-de_DE-custom",
),
# German weekday and month name translation
pytest.param(
TEST_DATETIME_STRING,
"EEEE",
"de_DE",
"Donnerstag",
id="weekday-de_DE",
),
pytest.param(
TEST_DATETIME_STRING,
"MMMM",
"de_DE",
"Oktober",
id="month-de_DE",
),
# French weekday and month name translation
pytest.param(
TEST_DATETIME_STRING,
"EEEE",
"fr_FR",
"jeudi",
id="weekday-fr_FR",
),
pytest.param(
TEST_DATETIME_STRING,
"MMMM",
"fr_FR",
"octobre",
id="month-fr_FR",
),
],
)
def test_localize_date_with_datetime_string(
self,
value: str,
format_style: str,
locale_str: str,
expected_output: str,
):
"""
Tests `localize_date` with `date` string across different locales and formats.
"""
assert localize_date(value, format_style, locale_str) == expected_output
@pytest.mark.parametrize(
"value, format_style, locale_str, expected_output",
[
pytest.param(
TEST_DATE_STRING,
"EEEE, MMM d, yyyy",
"en_US",
"Thursday, Oct 26, 2023",
id="date-en_US-custom",
),
pytest.param(
TEST_DATE_STRING,
"dd.MM.yyyy",
"de_DE",
"26.10.2023",
id="date-de_DE-custom",
),
# German weekday and month name translation
pytest.param(
TEST_DATE_STRING,
"EEEE",
"de_DE",
"Donnerstag",
id="weekday-de_DE",
),
pytest.param(
TEST_DATE_STRING,
"MMMM",
"de_DE",
"Oktober",
id="month-de_DE",
),
# French weekday and month name translation
pytest.param(
TEST_DATE_STRING,
"EEEE",
"fr_FR",
"jeudi",
id="weekday-fr_FR",
),
pytest.param(
TEST_DATE_STRING,
"MMMM",
"fr_FR",
"octobre",
id="month-fr_FR",
),
],
)
def test_localize_date_with_date_string(
self,
value: str,
format_style: str,
locale_str: str,
expected_output: str,
):
"""
Tests `localize_date` with `date` string across different locales and formats.
"""
assert localize_date(value, format_style, locale_str) == expected_output
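The string-input tests above pin down the behaviour expected of the `localize_date` filter: accept a date, datetime, or ISO 8601 string, validate the locale, and delegate formatting to Babel. A minimal sketch of such a filter, assuming Babel is installed (the actual paperless-ngx implementation may differ):

```python
from datetime import date, datetime

from babel import Locale, UnknownLocaleError
from babel.dates import format_date, format_datetime


def localize_date(value, format: str, locale: str) -> str:
    """Format a date/datetime (or ISO 8601 string) for the given locale.

    Minimal sketch matching the documented signature
    ``{{ value | localize_date(format, locale) }}``.
    """
    # Validate the locale first so a bad identifier fails loudly.
    try:
        Locale.parse(locale)
    except (ValueError, UnknownLocaleError) as e:
        raise ValueError(f"Invalid locale identifier: {locale}") from e

    # Accept ISO 8601 strings and parse them into date/datetime objects.
    if isinstance(value, str):
        try:
            value = datetime.fromisoformat(value)
        except ValueError:
            value = date.fromisoformat(value)

    if isinstance(value, datetime):
        return format_datetime(value, format=format, locale=locale)
    if isinstance(value, date):
        return format_date(value, format=format, locale=locale)
    raise TypeError(f"Unsupported type {type(value)}")
```

Babel accepts either a preset name (`short`, `medium`, `long`, `full`) or a CLDR pattern such as `EEEE, MMM d, yyyy` as the `format` argument, which is why the tests can mix both.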

View File

@@ -97,6 +97,12 @@ class TestArchiver(DirectoriesMixin, FileSystemAssertsMixin, TestCase):
 class TestDecryptDocuments(FileSystemAssertsMixin, TestCase):
+    @override_settings(
+        ORIGINALS_DIR=(Path(__file__).parent / "samples" / "originals"),
+        THUMBNAIL_DIR=(Path(__file__).parent / "samples" / "thumb"),
+        PASSPHRASE="test",
+        FILENAME_FORMAT=None,
+    )
     @mock.patch("documents.management.commands.decrypt_documents.input")
     def test_decrypt(self, m):
         media_dir = tempfile.mkdtemp()
@@ -105,12 +111,12 @@ class TestDecryptDocuments(FileSystemAssertsMixin, TestCase):
         originals_dir.mkdir(parents=True, exist_ok=True)
         thumb_dir.mkdir(parents=True, exist_ok=True)
-        with override_settings(
+        override_settings(
             ORIGINALS_DIR=originals_dir,
             THUMBNAIL_DIR=thumb_dir,
             PASSPHRASE="test",
-            FILENAME_FORMAT=None,
-        ):
+        ).enable()
         doc = Document.objects.create(
             checksum="82186aaa94f0b98697d704b90fd1c072",
             title="wow",

View File

@@ -1,8 +1,6 @@
import datetime
import shutil import shutil
import socket import socket
from datetime import timedelta from datetime import timedelta
from pathlib import Path
from typing import TYPE_CHECKING from typing import TYPE_CHECKING
from unittest import mock from unittest import mock
@@ -17,7 +15,6 @@ from guardian.shortcuts import get_users_with_perms
from httpx import HTTPError from httpx import HTTPError
from httpx import HTTPStatusError from httpx import HTTPStatusError
from pytest_httpx import HTTPXMock from pytest_httpx import HTTPXMock
from rest_framework.test import APIClient
from rest_framework.test import APITestCase from rest_framework.test import APITestCase
from documents.signals.handlers import run_workflows from documents.signals.handlers import run_workflows
@@ -25,7 +22,7 @@ from documents.signals.handlers import send_webhook
if TYPE_CHECKING: if TYPE_CHECKING:
from django.db.models import QuerySet from django.db.models import QuerySet
from pytest_django.fixtures import SettingsWrapper
from documents import tasks from documents import tasks
from documents.data_models import ConsumableDocument from documents.data_models import ConsumableDocument
@@ -125,7 +122,7 @@ class TestWorkflows(
filter_path=f"*/{self.dirs.scratch_dir.parts[-1]}/*", filter_path=f"*/{self.dirs.scratch_dir.parts[-1]}/*",
) )
action = WorkflowAction.objects.create( action = WorkflowAction.objects.create(
assign_title="Doc from {{correspondent}}", assign_title="Doc from {correspondent}",
assign_correspondent=self.c, assign_correspondent=self.c,
assign_document_type=self.dt, assign_document_type=self.dt,
assign_storage_path=self.sp, assign_storage_path=self.sp,
@@ -244,7 +241,7 @@ class TestWorkflows(
) )
action = WorkflowAction.objects.create( action = WorkflowAction.objects.create(
assign_title="Doc from {{correspondent}}", assign_title="Doc from {correspondent}",
assign_correspondent=self.c, assign_correspondent=self.c,
assign_document_type=self.dt, assign_document_type=self.dt,
assign_storage_path=self.sp, assign_storage_path=self.sp,
@@ -895,7 +892,7 @@ class TestWorkflows(
filter_filename="*sample*", filter_filename="*sample*",
) )
action = WorkflowAction.objects.create( action = WorkflowAction.objects.create(
assign_title="Doc created in {{created_year}}", assign_title="Doc created in {created_year}",
assign_correspondent=self.c2, assign_correspondent=self.c2,
assign_document_type=self.dt, assign_document_type=self.dt,
assign_storage_path=self.sp, assign_storage_path=self.sp,
@@ -1150,38 +1147,6 @@ class TestWorkflows(
expected_str = f"Document correspondent {doc.correspondent} does not match {trigger.filter_has_correspondent}" expected_str = f"Document correspondent {doc.correspondent} does not match {trigger.filter_has_correspondent}"
self.assertIn(expected_str, cm.output[1]) self.assertIn(expected_str, cm.output[1])
def test_document_added_no_match_storage_path(self):
trigger = WorkflowTrigger.objects.create(
type=WorkflowTrigger.WorkflowTriggerType.DOCUMENT_ADDED,
filter_has_storage_path=self.sp,
)
action = WorkflowAction.objects.create(
assign_title="Doc assign owner",
assign_owner=self.user2,
)
w = Workflow.objects.create(
name="Workflow 1",
order=0,
)
w.triggers.add(trigger)
w.actions.add(action)
w.save()
doc = Document.objects.create(
title="sample test",
original_filename="sample.pdf",
)
with self.assertLogs("paperless.matching", level="DEBUG") as cm:
document_consumption_finished.send(
sender=self.__class__,
document=doc,
)
expected_str = f"Document did not match {w}"
self.assertIn(expected_str, cm.output[0])
expected_str = f"Document storage path {doc.storage_path} does not match {trigger.filter_has_storage_path}"
self.assertIn(expected_str, cm.output[1])
def test_document_added_invalid_title_placeholders(self): def test_document_added_invalid_title_placeholders(self):
""" """
GIVEN: GIVEN:
@@ -1190,7 +1155,7 @@ class TestWorkflows(
WHEN: WHEN:
- File that matches is added - File that matches is added
THEN: THEN:
- Title is updated but the placeholder isn't replaced - Title is not updated, error is output
""" """
trigger = WorkflowTrigger.objects.create( trigger = WorkflowTrigger.objects.create(
type=WorkflowTrigger.WorkflowTriggerType.DOCUMENT_ADDED, type=WorkflowTrigger.WorkflowTriggerType.DOCUMENT_ADDED,
@@ -1216,12 +1181,15 @@ class TestWorkflows(
created=created, created=created,
) )
with self.assertLogs("paperless.handlers", level="ERROR") as cm:
document_consumption_finished.send( document_consumption_finished.send(
sender=self.__class__, sender=self.__class__,
document=doc, document=doc,
) )
expected_str = f"Error occurred parsing title assignment '{action.assign_title}', falling back to original"
self.assertIn(expected_str, cm.output[0])
self.assertEqual(doc.title, "Doc {created_year]") self.assertEqual(doc.title, "sample test")
def test_document_updated_workflow(self): def test_document_updated_workflow(self):
trigger = WorkflowTrigger.objects.create( trigger = WorkflowTrigger.objects.create(
@@ -1255,45 +1223,6 @@ class TestWorkflows(
self.assertEqual(doc.custom_fields.all().count(), 1) self.assertEqual(doc.custom_fields.all().count(), 1)
def test_document_consumption_workflow_month_placeholder_addded(self):
trigger = WorkflowTrigger.objects.create(
type=WorkflowTrigger.WorkflowTriggerType.CONSUMPTION,
sources=f"{DocumentSource.ApiUpload}",
filter_filename="simple*",
)
action = WorkflowAction.objects.create(
assign_title="Doc added in {{added_month_name_short}}",
)
w = Workflow.objects.create(
name="Workflow 1",
order=0,
)
w.triggers.add(trigger)
w.actions.add(action)
w.save()
superuser = User.objects.create_superuser("superuser")
self.client.force_authenticate(user=superuser)
test_file = shutil.copy(
self.SAMPLE_DIR / "simple.pdf",
self.dirs.scratch_dir / "simple.pdf",
)
with mock.patch("documents.tasks.ProgressManager", DummyProgressManager):
tasks.consume_file(
ConsumableDocument(
source=DocumentSource.ApiUpload,
original_file=test_file,
),
None,
)
document = Document.objects.first()
self.assertRegex(
document.title,
r"Doc added in \w{3,}",
) # Match any 3-letter month name
def test_document_updated_workflow_existing_custom_field(self): def test_document_updated_workflow_existing_custom_field(self):
""" """
GIVEN: GIVEN:
@@ -1848,7 +1777,6 @@ class TestWorkflows(
filter_filename="*sample*", filter_filename="*sample*",
filter_has_document_type=self.dt, filter_has_document_type=self.dt,
filter_has_correspondent=self.c, filter_has_correspondent=self.c,
filter_has_storage_path=self.sp,
) )
trigger.filter_has_tags.set([self.t1]) trigger.filter_has_tags.set([self.t1])
trigger.save() trigger.save()
@@ -1869,7 +1797,6 @@ class TestWorkflows(
title=f"sample test {i}", title=f"sample test {i}",
checksum=f"checksum{i}", checksum=f"checksum{i}",
correspondent=self.c, correspondent=self.c,
storage_path=self.sp,
original_filename=f"sample_{i}.pdf", original_filename=f"sample_{i}.pdf",
document_type=self.dt if i % 2 == 0 else None, document_type=self.dt if i % 2 == 0 else None,
) )
@@ -2108,7 +2035,7 @@ class TestWorkflows(
filter_filename="*simple*", filter_filename="*simple*",
) )
action = WorkflowAction.objects.create( action = WorkflowAction.objects.create(
assign_title="Doc from {{correspondent}}", assign_title="Doc from {correspondent}",
assign_correspondent=self.c, assign_correspondent=self.c,
assign_document_type=self.dt, assign_document_type=self.dt,
assign_storage_path=self.sp, assign_storage_path=self.sp,
@@ -2687,7 +2614,7 @@ class TestWorkflows(
) )
webhook_action = WorkflowActionWebhook.objects.create( webhook_action = WorkflowActionWebhook.objects.create(
use_params=False, use_params=False,
body="Test message: {{doc_url}}", body="Test message: {doc_url}",
url="http://paperless-ngx.com", url="http://paperless-ngx.com",
include_document=False, include_document=False,
) )
@@ -2746,7 +2673,7 @@ class TestWorkflows(
) )
webhook_action = WorkflowActionWebhook.objects.create( webhook_action = WorkflowActionWebhook.objects.create(
use_params=False, use_params=False,
body="Test message: {{doc_url}}", body="Test message: {doc_url}",
url="http://paperless-ngx.com", url="http://paperless-ngx.com",
include_document=True, include_document=True,
) )
@@ -3203,238 +3130,3 @@ class TestWebhookSecurity:
req = httpx_mock.get_request() req = httpx_mock.get_request()
assert req.headers["Host"] == "paperless-ngx.com" assert req.headers["Host"] == "paperless-ngx.com"
assert "evil.test" not in req.headers.get("Host", "") assert "evil.test" not in req.headers.get("Host", "")
@pytest.mark.django_db
class TestDateWorkflowLocalization(
SampleDirMixin,
):
"""Test cases for workflows that use date localization in templates."""
TEST_DATETIME = datetime.datetime(
2023,
6,
26,
14,
30,
5,
tzinfo=datetime.timezone.utc,
)
@pytest.mark.parametrize(
"title_template,expected_title",
[
pytest.param(
"Created at {{ created | localize_date('MMMM', 'es_ES') }}",
"Created at junio",
id="spanish_month",
),
pytest.param(
"Created at {{ created | localize_date('MMMM', 'de_DE') }}",
"Created at Juni", # codespell:ignore
id="german_month",
),
pytest.param(
"Created at {{ created | localize_date('dd/MM/yyyy', 'en_GB') }}",
"Created at 26/06/2023",
id="british_date_format",
),
],
)
def test_document_added_workflow_localization(
self,
title_template: str,
expected_title: str,
):
"""
GIVEN:
- Document added workflow with title template using localize_date filter
WHEN:
- Document is consumed
THEN:
- Document title is set with localized date
"""
trigger = WorkflowTrigger.objects.create(
type=WorkflowTrigger.WorkflowTriggerType.DOCUMENT_ADDED,
filter_filename="*sample*",
)
action = WorkflowAction.objects.create(
assign_title=title_template,
)
workflow = Workflow.objects.create(
name="Workflow 1",
order=0,
)
workflow.triggers.add(trigger)
workflow.actions.add(action)
workflow.save()
doc = Document.objects.create(
title="sample test",
correspondent=None,
original_filename="sample.pdf",
created=self.TEST_DATETIME,
)
document_consumption_finished.send(
sender=self.__class__,
document=doc,
)
doc.refresh_from_db()
assert doc.title == expected_title
@pytest.mark.parametrize(
"title_template,expected_title",
[
pytest.param(
"Created at {{ created | localize_date('MMMM', 'es_ES') }}",
"Created at junio",
id="spanish_month",
),
pytest.param(
"Created at {{ created | localize_date('MMMM', 'de_DE') }}",
"Created at Juni", # codespell:ignore
id="german_month",
),
pytest.param(
"Created at {{ created | localize_date('dd/MM/yyyy', 'en_GB') }}",
"Created at 26/06/2023",
id="british_date_format",
),
],
)
def test_document_updated_workflow_localization(
self,
title_template: str,
expected_title: str,
):
"""
GIVEN:
- Document updated workflow with title template using localize_date filter
WHEN:
- Document is updated via API
THEN:
- Document title is set with localized date
"""
# Setup test data
dt = DocumentType.objects.create(name="DocType Name")
c = Correspondent.objects.create(name="Correspondent Name")
client = APIClient()
superuser = User.objects.create_superuser("superuser")
client.force_authenticate(user=superuser)
trigger = WorkflowTrigger.objects.create(
type=WorkflowTrigger.WorkflowTriggerType.DOCUMENT_UPDATED,
filter_has_document_type=dt,
)
doc = Document.objects.create(
title="sample test",
correspondent=c,
original_filename="sample.pdf",
created=self.TEST_DATETIME,
)
action = WorkflowAction.objects.create(
assign_title=title_template,
)
workflow = Workflow.objects.create(
name="Workflow 1",
order=0,
)
workflow.triggers.add(trigger)
workflow.actions.add(action)
workflow.save()
client.patch(
f"/api/documents/{doc.id}/",
{"document_type": dt.id},
format="json",
)
doc.refresh_from_db()
assert doc.title == expected_title
@pytest.mark.parametrize(
"title_template,expected_title",
[
pytest.param(
"Added at {{ added | localize_date('MMMM', 'es_ES') }}",
"Added at junio",
id="spanish_month",
),
pytest.param(
"Added at {{ added | localize_date('MMMM', 'de_DE') }}",
"Added at Juni", # codespell:ignore
id="german_month",
),
pytest.param(
"Added at {{ added | localize_date('dd/MM/yyyy', 'en_GB') }}",
"Added at 26/06/2023",
id="british_date_format",
),
],
)
def test_document_consumption_workflow_localization(
self,
tmp_path: Path,
settings: SettingsWrapper,
title_template: str,
expected_title: str,
):
trigger = WorkflowTrigger.objects.create(
type=WorkflowTrigger.WorkflowTriggerType.CONSUMPTION,
sources=f"{DocumentSource.ApiUpload}",
filter_filename="simple*",
)
test_file = shutil.copy(
self.SAMPLE_DIR / "simple.pdf",
tmp_path / "simple.pdf",
)
action = WorkflowAction.objects.create(
assign_title=title_template,
)
w = Workflow.objects.create(
name="Workflow 1",
order=0,
)
w.triggers.add(trigger)
w.actions.add(action)
w.save()
(tmp_path / "scratch").mkdir(parents=True, exist_ok=True)
(tmp_path / "thumbnails").mkdir(parents=True, exist_ok=True)
# Temporarily override "now" for the environment so templates using
# added/created placeholders behave as if it's a different system date.
with (
mock.patch(
"documents.tasks.ProgressManager",
DummyProgressManager,
),
mock.patch(
"django.utils.timezone.now",
return_value=self.TEST_DATETIME,
),
override_settings(
SCRATCH_DIR=tmp_path / "scratch",
THUMBNAIL_DIR=tmp_path / "thumbnails",
),
):
tasks.consume_file(
ConsumableDocument(
source=DocumentSource.ApiUpload,
original_file=test_file,
),
None,
)
document = Document.objects.first()
assert document.title == expected_title
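The workflow tests above depend on `localize_date` being registered as a Jinja2 filter so that title templates like `{{ created | localize_date('MMMM', 'es_ES') }}` render localized dates. A minimal sketch of that wiring, assuming Jinja2 and Babel are available (the environment setup here is hypothetical; paperless-ngx configures its own template environment):

```python
from datetime import datetime, timezone

from babel.dates import format_datetime
from jinja2 import Environment


def localize_date(value, format, locale):
    # Minimal stand-in for the real filter: handles datetime objects only.
    return format_datetime(value, format=format, locale=locale)


# Register the filter on a plain environment (hypothetical setup).
env = Environment()
env.filters["localize_date"] = localize_date

template = env.from_string(
    "Created at {{ created | localize_date('MMMM', 'es_ES') }}",
)
created = datetime(2023, 6, 26, 14, 30, 5, tzinfo=timezone.utc)
print(template.render(created=created))  # Created at junio
```

Once the filter is in `env.filters`, any workflow field rendered through that environment (titles, webhook bodies, email subjects) can localize dates the same way.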

View File

@@ -2,7 +2,7 @@ msgid ""
msgstr "" msgstr ""
"Project-Id-Version: paperless-ngx\n" "Project-Id-Version: paperless-ngx\n"
"Report-Msgid-Bugs-To: \n" "Report-Msgid-Bugs-To: \n"
"POT-Creation-Date: 2025-09-14 03:21+0000\n" "POT-Creation-Date: 2025-09-09 20:04+0000\n"
"PO-Revision-Date: 2022-02-17 04:17\n" "PO-Revision-Date: 2022-02-17 04:17\n"
"Last-Translator: \n" "Last-Translator: \n"
"Language-Team: English\n" "Language-Team: English\n"
@@ -21,39 +21,39 @@ msgstr ""
msgid "Documents" msgid "Documents"
msgstr "" msgstr ""
#: documents/filters.py:386 #: documents/filters.py:384
msgid "Value must be valid JSON." msgid "Value must be valid JSON."
msgstr "" msgstr ""
#: documents/filters.py:405 #: documents/filters.py:403
msgid "Invalid custom field query expression" msgid "Invalid custom field query expression"
msgstr "" msgstr ""
#: documents/filters.py:415 #: documents/filters.py:413
msgid "Invalid expression list. Must be nonempty." msgid "Invalid expression list. Must be nonempty."
msgstr "" msgstr ""
#: documents/filters.py:436 #: documents/filters.py:434
msgid "Invalid logical operator {op!r}" msgid "Invalid logical operator {op!r}"
msgstr "" msgstr ""
#: documents/filters.py:450 #: documents/filters.py:448
msgid "Maximum number of query conditions exceeded." msgid "Maximum number of query conditions exceeded."
msgstr "" msgstr ""
#: documents/filters.py:515 #: documents/filters.py:513
msgid "{name!r} is not a valid custom field." msgid "{name!r} is not a valid custom field."
msgstr "" msgstr ""
#: documents/filters.py:552 #: documents/filters.py:550
msgid "{data_type} does not support query expr {expr!r}." msgid "{data_type} does not support query expr {expr!r}."
msgstr "" msgstr ""
#: documents/filters.py:660 #: documents/filters.py:658
msgid "Maximum nesting depth exceeded." msgid "Maximum nesting depth exceeded."
msgstr "" msgstr ""
#: documents/filters.py:845 #: documents/filters.py:843
msgid "Custom field not found" msgid "Custom field not found"
msgstr "" msgstr ""
@@ -61,27 +61,27 @@ msgstr ""
msgid "owner" msgid "owner"
msgstr "" msgstr ""
#: documents/models.py:53 documents/models.py:950 #: documents/models.py:53 documents/models.py:946
msgid "None" msgid "None"
msgstr "" msgstr ""
#: documents/models.py:54 documents/models.py:951 #: documents/models.py:54 documents/models.py:947
msgid "Any word" msgid "Any word"
msgstr "" msgstr ""
#: documents/models.py:55 documents/models.py:952 #: documents/models.py:55 documents/models.py:948
msgid "All words" msgid "All words"
msgstr "" msgstr ""
#: documents/models.py:56 documents/models.py:953 #: documents/models.py:56 documents/models.py:949
msgid "Exact match" msgid "Exact match"
msgstr "" msgstr ""
#: documents/models.py:57 documents/models.py:954 #: documents/models.py:57 documents/models.py:950
msgid "Regular expression" msgid "Regular expression"
msgstr "" msgstr ""
#: documents/models.py:58 documents/models.py:955 #: documents/models.py:58 documents/models.py:951
msgid "Fuzzy word" msgid "Fuzzy word"
msgstr "" msgstr ""
@@ -89,20 +89,20 @@ msgstr ""
msgid "Automatic" msgid "Automatic"
msgstr "" msgstr ""
#: documents/models.py:62 documents/models.py:423 documents/models.py:1451 #: documents/models.py:62 documents/models.py:423 documents/models.py:1441
#: paperless_mail/models.py:23 paperless_mail/models.py:143 #: paperless_mail/models.py:23 paperless_mail/models.py:143
msgid "name" msgid "name"
msgstr "" msgstr ""
#: documents/models.py:64 documents/models.py:1019 #: documents/models.py:64 documents/models.py:1015
msgid "match" msgid "match"
msgstr "" msgstr ""
#: documents/models.py:67 documents/models.py:1022 #: documents/models.py:67 documents/models.py:1018
msgid "matching algorithm" msgid "matching algorithm"
msgstr "" msgstr ""
#: documents/models.py:72 documents/models.py:1027 #: documents/models.py:72 documents/models.py:1023
msgid "is insensitive" msgid "is insensitive"
msgstr "" msgstr ""
@@ -207,7 +207,7 @@ msgid "The number of pages of the document."
msgstr "" msgstr ""
#: documents/models.py:217 documents/models.py:655 documents/models.py:693 #: documents/models.py:217 documents/models.py:655 documents/models.py:693
#: documents/models.py:765 documents/models.py:824 #: documents/models.py:764 documents/models.py:822
msgid "created" msgid "created"
msgstr "" msgstr ""
@@ -256,7 +256,7 @@ msgid "The position of this document in your physical document archive."
msgstr "" msgstr ""
#: documents/models.py:294 documents/models.py:666 documents/models.py:720 #: documents/models.py:294 documents/models.py:666 documents/models.py:720
#: documents/models.py:1494 #: documents/models.py:1484
msgid "document" msgid "document"
msgstr "" msgstr ""
@@ -280,11 +280,11 @@ msgstr ""
msgid "Title" msgid "Title"
msgstr "" msgstr ""
#: documents/models.py:410 documents/models.py:971 #: documents/models.py:410 documents/models.py:967
msgid "Created" msgid "Created"
msgstr "" msgstr ""
#: documents/models.py:411 documents/models.py:970 #: documents/models.py:411 documents/models.py:966
msgid "Added" msgid "Added"
msgstr "" msgstr ""
@@ -752,434 +752,427 @@ msgstr ""
msgid "Select" msgid "Select"
msgstr "" msgstr ""
#: documents/models.py:762 #: documents/models.py:773
msgid "Long Text"
msgstr ""
#: documents/models.py:774
msgid "data type" msgid "data type"
msgstr "" msgstr ""
#: documents/models.py:781 #: documents/models.py:780
msgid "extra data" msgid "extra data"
msgstr "" msgstr ""
#: documents/models.py:785 #: documents/models.py:784
msgid "Extra data for the custom field, such as select options" msgid "Extra data for the custom field, such as select options"
msgstr "" msgstr ""
#: documents/models.py:791 #: documents/models.py:790
msgid "custom field" msgid "custom field"
msgstr "" msgstr ""
#: documents/models.py:792 #: documents/models.py:791
msgid "custom fields" msgid "custom fields"
msgstr "" msgstr ""
#: documents/models.py:892 #: documents/models.py:888
msgid "custom field instance" msgid "custom field instance"
msgstr "" msgstr ""
#: documents/models.py:893 #: documents/models.py:889
msgid "custom field instances" msgid "custom field instances"
msgstr "" msgstr ""
#: documents/models.py:958 #: documents/models.py:954
msgid "Consumption Started" msgid "Consumption Started"
msgstr "" msgstr ""
#: documents/models.py:959 #: documents/models.py:955
msgid "Document Added" msgid "Document Added"
msgstr "" msgstr ""
#: documents/models.py:960 #: documents/models.py:956
msgid "Document Updated" msgid "Document Updated"
msgstr "" msgstr ""
#: documents/models.py:961 #: documents/models.py:957
msgid "Scheduled" msgid "Scheduled"
msgstr "" msgstr ""
#: documents/models.py:964 #: documents/models.py:960
msgid "Consume Folder" msgid "Consume Folder"
msgstr "" msgstr ""
#: documents/models.py:965 #: documents/models.py:961
msgid "Api Upload" msgid "Api Upload"
msgstr "" msgstr ""
#: documents/models.py:966 #: documents/models.py:962
msgid "Mail Fetch" msgid "Mail Fetch"
msgstr "" msgstr ""
#: documents/models.py:967 #: documents/models.py:963
msgid "Web UI" msgid "Web UI"
msgstr "" msgstr ""
#: documents/models.py:972 #: documents/models.py:968
msgid "Modified" msgid "Modified"
msgstr "" msgstr ""
#: documents/models.py:973 #: documents/models.py:969
msgid "Custom Field" msgid "Custom Field"
msgstr "" msgstr ""
#: documents/models.py:976 #: documents/models.py:972
msgid "Workflow Trigger Type" msgid "Workflow Trigger Type"
msgstr "" msgstr ""
#: documents/models.py:988 #: documents/models.py:984
msgid "filter path" msgid "filter path"
msgstr "" msgstr ""
#: documents/models.py:993 #: documents/models.py:989
msgid "" msgid ""
"Only consume documents with a path that matches this if specified. Wildcards " "Only consume documents with a path that matches this if specified. Wildcards "
"specified as * are allowed. Case insensitive." "specified as * are allowed. Case insensitive."
msgstr "" msgstr ""
#: documents/models.py:1000 #: documents/models.py:996
msgid "filter filename" msgid "filter filename"
msgstr "" msgstr ""
#: documents/models.py:1005 paperless_mail/models.py:200 #: documents/models.py:1001 paperless_mail/models.py:200
msgid "" msgid ""
"Only consume documents which entirely match this filename if specified. " "Only consume documents which entirely match this filename if specified. "
"Wildcards such as *.pdf or *invoice* are allowed. Case insensitive." "Wildcards such as *.pdf or *invoice* are allowed. Case insensitive."
msgstr "" msgstr ""
#: documents/models.py:1016 #: documents/models.py:1012
msgid "filter documents from this mail rule" msgid "filter documents from this mail rule"
msgstr "" msgstr ""
#: documents/models.py:1032 #: documents/models.py:1028
msgid "has these tag(s)" msgid "has these tag(s)"
msgstr "" msgstr ""
#: documents/models.py:1040 #: documents/models.py:1036
msgid "has this document type" msgid "has this document type"
msgstr "" msgstr ""
#: documents/models.py:1048 #: documents/models.py:1044
msgid "has this correspondent" msgid "has this correspondent"
msgstr "" msgstr ""
#: documents/models.py:1056 #: documents/models.py:1048
msgid "has this storage path"
msgstr ""
#: documents/models.py:1060
msgid "schedule offset days" msgid "schedule offset days"
msgstr "" msgstr ""
#: documents/models.py:1063 #: documents/models.py:1051
msgid "The number of days to offset the schedule trigger by." msgid "The number of days to offset the schedule trigger by."
msgstr "" msgstr ""
#: documents/models.py:1068 #: documents/models.py:1056
msgid "schedule is recurring" msgid "schedule is recurring"
msgstr "" msgstr ""
#: documents/models.py:1071 #: documents/models.py:1059
msgid "If the schedule should be recurring." msgid "If the schedule should be recurring."
msgstr "" msgstr ""
#: documents/models.py:1076 #: documents/models.py:1064
msgid "schedule recurring delay in days" msgid "schedule recurring delay in days"
msgstr "" msgstr ""
#: documents/models.py:1080 #: documents/models.py:1068
msgid "The number of days between recurring schedule triggers." msgid "The number of days between recurring schedule triggers."
msgstr "" msgstr ""
#: documents/models.py:1085 #: documents/models.py:1073
msgid "schedule date field" msgid "schedule date field"
msgstr "" msgstr ""
#: documents/models.py:1090 #: documents/models.py:1078
msgid "The field to check for a schedule trigger." msgid "The field to check for a schedule trigger."
msgstr "" msgstr ""
#: documents/models.py:1099 #: documents/models.py:1087
msgid "schedule date custom field" msgid "schedule date custom field"
msgstr "" msgstr ""
#: documents/models.py:1103 #: documents/models.py:1091
msgid "workflow trigger" msgid "workflow trigger"
msgstr "" msgstr ""
#: documents/models.py:1104 #: documents/models.py:1092
msgid "workflow triggers" msgid "workflow triggers"
msgstr "" msgstr ""
#: documents/models.py:1112 #: documents/models.py:1100
msgid "email subject" msgid "email subject"
msgstr "" msgstr ""
#: documents/models.py:1116 #: documents/models.py:1104
msgid "" msgid ""
"The subject of the email, can include some placeholders, see documentation." "The subject of the email, can include some placeholders, see documentation."
msgstr "" msgstr ""
#: documents/models.py:1122 #: documents/models.py:1110
msgid "email body" msgid "email body"
msgstr "" msgstr ""
#: documents/models.py:1125 #: documents/models.py:1113
msgid "" msgid ""
"The body (message) of the email, can include some placeholders, see " "The body (message) of the email, can include some placeholders, see "
"documentation." "documentation."
msgstr "" msgstr ""
#: documents/models.py:1131 #: documents/models.py:1119
msgid "emails to" msgid "emails to"
msgstr "" msgstr ""
#: documents/models.py:1134 #: documents/models.py:1122
msgid "The destination email addresses, comma separated." msgid "The destination email addresses, comma separated."
msgstr "" msgstr ""
#: documents/models.py:1140 #: documents/models.py:1128
msgid "include document in email" msgid "include document in email"
msgstr "" msgstr ""
#: documents/models.py:1151 #: documents/models.py:1139
msgid "webhook url" msgid "webhook url"
msgstr "" msgstr ""
#: documents/models.py:1154 #: documents/models.py:1142
msgid "The destination URL for the notification." msgid "The destination URL for the notification."
msgstr "" msgstr ""
#: documents/models.py:1159 #: documents/models.py:1147
msgid "use parameters" msgid "use parameters"
msgstr "" msgstr ""
#: documents/models.py:1164 #: documents/models.py:1152
msgid "send as JSON" msgid "send as JSON"
msgstr "" msgstr ""
#: documents/models.py:1168 #: documents/models.py:1156
msgid "webhook parameters" msgid "webhook parameters"
msgstr "" msgstr ""
#: documents/models.py:1171 #: documents/models.py:1159
msgid "The parameters to send with the webhook URL if body not used." msgid "The parameters to send with the webhook URL if body not used."
msgstr "" msgstr ""
#: documents/models.py:1175 #: documents/models.py:1163
msgid "webhook body" msgid "webhook body"
msgstr "" msgstr ""
#: documents/models.py:1178 #: documents/models.py:1166
msgid "The body to send with the webhook URL if parameters not used." msgid "The body to send with the webhook URL if parameters not used."
msgstr "" msgstr ""
#: documents/models.py:1182 #: documents/models.py:1170
msgid "webhook headers" msgid "webhook headers"
msgstr "" msgstr ""
#: documents/models.py:1185 #: documents/models.py:1173
msgid "The headers to send with the webhook URL." msgid "The headers to send with the webhook URL."
msgstr "" msgstr ""
#: documents/models.py:1190 #: documents/models.py:1178
msgid "include document in webhook" msgid "include document in webhook"
msgstr "" msgstr ""
#: documents/models.py:1201 #: documents/models.py:1189
msgid "Assignment" msgid "Assignment"
msgstr "" msgstr ""
#: documents/models.py:1205 #: documents/models.py:1193
msgid "Removal" msgid "Removal"
msgstr "" msgstr ""
#: documents/models.py:1209 documents/templates/account/password_reset.html:15 #: documents/models.py:1197 documents/templates/account/password_reset.html:15
msgid "Email" msgid "Email"
msgstr "" msgstr ""
#: documents/models.py:1213 #: documents/models.py:1201
msgid "Webhook" msgid "Webhook"
msgstr "" msgstr ""
#: documents/models.py:1217 #: documents/models.py:1205
msgid "Workflow Action Type" msgid "Workflow Action Type"
msgstr "" msgstr ""
#: documents/models.py:1223 #: documents/models.py:1211
msgid "assign title" msgid "assign title"
msgstr "" msgstr ""
#: documents/models.py:1227 #: documents/models.py:1216
msgid "Assign a document title, must be a Jinja2 template, see documentation." msgid ""
"Assign a document title, can include some placeholders, see documentation."
 msgstr ""
 
-#: documents/models.py:1235 paperless_mail/models.py:274
+#: documents/models.py:1225 paperless_mail/models.py:274
 msgid "assign this tag"
 msgstr ""
 
-#: documents/models.py:1244 paperless_mail/models.py:282
+#: documents/models.py:1234 paperless_mail/models.py:282
 msgid "assign this document type"
 msgstr ""
 
-#: documents/models.py:1253 paperless_mail/models.py:296
+#: documents/models.py:1243 paperless_mail/models.py:296
 msgid "assign this correspondent"
 msgstr ""
 
-#: documents/models.py:1262
+#: documents/models.py:1252
 msgid "assign this storage path"
 msgstr ""
 
-#: documents/models.py:1271
+#: documents/models.py:1261
 msgid "assign this owner"
 msgstr ""
 
-#: documents/models.py:1278
+#: documents/models.py:1268
 msgid "grant view permissions to these users"
 msgstr ""
 
-#: documents/models.py:1285
+#: documents/models.py:1275
 msgid "grant view permissions to these groups"
 msgstr ""
 
-#: documents/models.py:1292
+#: documents/models.py:1282
 msgid "grant change permissions to these users"
 msgstr ""
 
-#: documents/models.py:1299
+#: documents/models.py:1289
 msgid "grant change permissions to these groups"
 msgstr ""
 
-#: documents/models.py:1306
+#: documents/models.py:1296
 msgid "assign these custom fields"
 msgstr ""
 
-#: documents/models.py:1310
+#: documents/models.py:1300
 msgid "custom field values"
 msgstr ""
 
-#: documents/models.py:1314
+#: documents/models.py:1304
 msgid "Optional values to assign to the custom fields."
 msgstr ""
 
-#: documents/models.py:1323
+#: documents/models.py:1313
 msgid "remove these tag(s)"
 msgstr ""
 
-#: documents/models.py:1328
+#: documents/models.py:1318
 msgid "remove all tags"
 msgstr ""
 
-#: documents/models.py:1335
+#: documents/models.py:1325
 msgid "remove these document type(s)"
 msgstr ""
 
-#: documents/models.py:1340
+#: documents/models.py:1330
 msgid "remove all document types"
 msgstr ""
 
-#: documents/models.py:1347
+#: documents/models.py:1337
 msgid "remove these correspondent(s)"
 msgstr ""
 
-#: documents/models.py:1352
+#: documents/models.py:1342
 msgid "remove all correspondents"
 msgstr ""
 
-#: documents/models.py:1359
+#: documents/models.py:1349
 msgid "remove these storage path(s)"
 msgstr ""
 
-#: documents/models.py:1364
+#: documents/models.py:1354
 msgid "remove all storage paths"
 msgstr ""
 
-#: documents/models.py:1371
+#: documents/models.py:1361
 msgid "remove these owner(s)"
 msgstr ""
 
-#: documents/models.py:1376
+#: documents/models.py:1366
 msgid "remove all owners"
 msgstr ""
 
-#: documents/models.py:1383
+#: documents/models.py:1373
 msgid "remove view permissions for these users"
 msgstr ""
 
-#: documents/models.py:1390
+#: documents/models.py:1380
 msgid "remove view permissions for these groups"
 msgstr ""
 
-#: documents/models.py:1397
+#: documents/models.py:1387
 msgid "remove change permissions for these users"
 msgstr ""
 
-#: documents/models.py:1404
+#: documents/models.py:1394
 msgid "remove change permissions for these groups"
 msgstr ""
 
-#: documents/models.py:1409
+#: documents/models.py:1399
 msgid "remove all permissions"
 msgstr ""
 
-#: documents/models.py:1416
+#: documents/models.py:1406
 msgid "remove these custom fields"
 msgstr ""
 
-#: documents/models.py:1421
+#: documents/models.py:1411
 msgid "remove all custom fields"
 msgstr ""
 
-#: documents/models.py:1430
+#: documents/models.py:1420
 msgid "email"
 msgstr ""
 
-#: documents/models.py:1439
+#: documents/models.py:1429
 msgid "webhook"
 msgstr ""
 
-#: documents/models.py:1443
+#: documents/models.py:1433
 msgid "workflow action"
 msgstr ""
 
-#: documents/models.py:1444
+#: documents/models.py:1434
 msgid "workflow actions"
 msgstr ""
 
-#: documents/models.py:1453 paperless_mail/models.py:145
+#: documents/models.py:1443 paperless_mail/models.py:145
 msgid "order"
 msgstr ""
 
-#: documents/models.py:1459
+#: documents/models.py:1449
 msgid "triggers"
 msgstr ""
 
-#: documents/models.py:1466
+#: documents/models.py:1456
 msgid "actions"
 msgstr ""
 
-#: documents/models.py:1469 paperless_mail/models.py:154
+#: documents/models.py:1459 paperless_mail/models.py:154
 msgid "enabled"
 msgstr ""
 
-#: documents/models.py:1480
+#: documents/models.py:1470
 msgid "workflow"
 msgstr ""
 
-#: documents/models.py:1484
+#: documents/models.py:1474
 msgid "workflow trigger type"
 msgstr ""
 
-#: documents/models.py:1498
+#: documents/models.py:1488
 msgid "date run"
 msgstr ""
 
-#: documents/models.py:1504
+#: documents/models.py:1494
 msgid "workflow run"
 msgstr ""
 
-#: documents/models.py:1505
+#: documents/models.py:1495
 msgid "workflow runs"
 msgstr ""

View File

@@ -322,7 +322,6 @@ INSTALLED_APPS = [
     "paperless_tesseract.apps.PaperlessTesseractConfig",
     "paperless_text.apps.PaperlessTextConfig",
     "paperless_mail.apps.PaperlessMailConfig",
-    "paperless_remote.apps.PaperlessRemoteParserConfig",
     "django.contrib.admin",
     "rest_framework",
     "rest_framework.authtoken",
@@ -1389,10 +1388,3 @@ WEBHOOKS_ALLOW_INTERNAL_REQUESTS = __get_boolean(
     "PAPERLESS_WEBHOOKS_ALLOW_INTERNAL_REQUESTS",
     "true",
 )
-
-###############################################################################
-# Remote Parser                                                               #
-###############################################################################
-REMOTE_OCR_ENGINE = os.getenv("PAPERLESS_REMOTE_OCR_ENGINE")
-REMOTE_OCR_API_KEY = os.getenv("PAPERLESS_REMOTE_OCR_API_KEY")
-REMOTE_OCR_ENDPOINT = os.getenv("PAPERLESS_REMOTE_OCR_ENDPOINT")

View File

@@ -1,4 +0,0 @@
-# this is here so that django finds the checks.
-from paperless_remote.checks import check_remote_parser_configured
-
-__all__ = ["check_remote_parser_configured"]

View File

@@ -1,14 +0,0 @@
-from django.apps import AppConfig
-
-from paperless_remote.signals import remote_consumer_declaration
-
-
-class PaperlessRemoteParserConfig(AppConfig):
-    name = "paperless_remote"
-
-    def ready(self):
-        from documents.signals import document_consumer_declaration
-
-        document_consumer_declaration.connect(remote_consumer_declaration)
-
-        AppConfig.ready(self)

View File

@@ -1,17 +0,0 @@
-from django.conf import settings
-from django.core.checks import Error
-from django.core.checks import register
-
-
-@register()
-def check_remote_parser_configured(app_configs, **kwargs):
-    if settings.REMOTE_OCR_ENGINE == "azureai" and not (
-        settings.REMOTE_OCR_ENDPOINT and settings.REMOTE_OCR_API_KEY
-    ):
-        return [
-            Error(
-                "Azure AI remote parser requires endpoint and API key to be configured.",
-            ),
-        ]
-
-    return []

View File

@@ -1,114 +0,0 @@
-from pathlib import Path
-
-from django.conf import settings
-
-from paperless_tesseract.parsers import RasterisedDocumentParser
-
-
-class RemoteEngineConfig:
-    def __init__(
-        self,
-        engine: str,
-        api_key: str | None = None,
-        endpoint: str | None = None,
-    ):
-        self.engine = engine
-        self.api_key = api_key
-        self.endpoint = endpoint
-
-    def engine_is_valid(self):
-        valid = self.engine in ["azureai"] and self.api_key is not None
-        if self.engine == "azureai":
-            valid = valid and self.endpoint is not None
-        return valid
-
-
-class RemoteDocumentParser(RasterisedDocumentParser):
-    """
-    This parser uses a remote OCR engine to parse documents. Currently, it supports Azure AI Vision
-    as this is the only service that provides a remote OCR API with text-embedded PDF output.
-    """
-
-    logging_name = "paperless.parsing.remote"
-
-    def get_settings(self) -> RemoteEngineConfig:
-        """
-        Returns the configuration for the remote OCR engine, loaded from Django settings.
-        """
-        return RemoteEngineConfig(
-            engine=settings.REMOTE_OCR_ENGINE,
-            api_key=settings.REMOTE_OCR_API_KEY,
-            endpoint=settings.REMOTE_OCR_ENDPOINT,
-        )
-
-    def supported_mime_types(self):
-        if self.settings.engine_is_valid():
-            return {
-                "application/pdf": ".pdf",
-                "image/png": ".png",
-                "image/jpeg": ".jpg",
-                "image/tiff": ".tiff",
-                "image/bmp": ".bmp",
-                "image/gif": ".gif",
-                "image/webp": ".webp",
-            }
-        else:
-            return {}
-
-    def azure_ai_vision_parse(
-        self,
-        file: Path,
-    ) -> str | None:
-        """
-        Uses Azure AI Vision to parse the document and return the text content.
-
-        It requests a searchable PDF output with embedded text.
-        The PDF is saved to the archive_path attribute.
-        Returns the text content extracted from the document.
-
-        If the parsing fails, it returns None.
-        """
-        from azure.ai.documentintelligence import DocumentIntelligenceClient
-        from azure.ai.documentintelligence.models import AnalyzeDocumentRequest
-        from azure.ai.documentintelligence.models import AnalyzeOutputOption
-        from azure.ai.documentintelligence.models import DocumentContentFormat
-        from azure.core.credentials import AzureKeyCredential
-
-        client = DocumentIntelligenceClient(
-            endpoint=self.settings.endpoint,
-            credential=AzureKeyCredential(self.settings.api_key),
-        )
-
-        with file.open("rb") as f:
-            analyze_request = AnalyzeDocumentRequest(bytes_source=f.read())
-            poller = client.begin_analyze_document(
-                model_id="prebuilt-read",
-                body=analyze_request,
-                output_content_format=DocumentContentFormat.TEXT,
-                output=[AnalyzeOutputOption.PDF],  # request searchable PDF output
-                content_type="application/json",
-            )
-
-        poller.wait()
-        result_id = poller.details["operation_id"]
-        result = poller.result()
-
-        # Download the PDF with embedded text
-        self.archive_path = self.tempdir / "archive.pdf"
-        with self.archive_path.open("wb") as f:
-            for chunk in client.get_analyze_result_pdf(
-                model_id="prebuilt-read",
-                result_id=result_id,
-            ):
-                f.write(chunk)
-
-        client.close()
-        return result.content
-
-    def parse(self, document_path: Path, mime_type, file_name=None):
-        if not self.settings.engine_is_valid():
-            self.log.warning(
-                "No valid remote parser engine is configured, content will be empty.",
-            )
-            self.text = ""
-            return
-        elif self.settings.engine == "azureai":
-            self.text = self.azure_ai_vision_parse(document_path)

View File

@@ -1,18 +0,0 @@
-def get_parser(*args, **kwargs):
-    from paperless_remote.parsers import RemoteDocumentParser
-
-    return RemoteDocumentParser(*args, **kwargs)
-
-
-def get_supported_mime_types():
-    from paperless_remote.parsers import RemoteDocumentParser
-
-    return RemoteDocumentParser(None).supported_mime_types()
-
-
-def remote_consumer_declaration(sender, **kwargs):
-    return {
-        "parser": get_parser,
-        "weight": 5,
-        "mime_types": get_supported_mime_types(),
-    }

View File

@@ -1,30 +0,0 @@
-from unittest import TestCase
-
-from django.test import override_settings
-
-from paperless_remote import check_remote_parser_configured
-
-
-class TestChecks(TestCase):
-    @override_settings(REMOTE_OCR_ENGINE=None)
-    def test_no_engine(self):
-        msgs = check_remote_parser_configured(None)
-        self.assertEqual(len(msgs), 0)
-
-    @override_settings(REMOTE_OCR_ENGINE="azureai")
-    @override_settings(REMOTE_OCR_API_KEY="somekey")
-    @override_settings(REMOTE_OCR_ENDPOINT=None)
-    def test_azure_no_endpoint(self):
-        msgs = check_remote_parser_configured(None)
-        self.assertEqual(len(msgs), 1)
-        self.assertTrue(
-            msgs[0].msg.startswith(
-                "Azure AI remote parser requires endpoint and API key to be configured.",
-            ),
-        )
-
-    @override_settings(REMOTE_OCR_ENGINE="something")
-    @override_settings(REMOTE_OCR_API_KEY="somekey")
-    def test_valid_configuration(self):
-        msgs = check_remote_parser_configured(None)
-        self.assertEqual(len(msgs), 0)

View File

@@ -1,101 +0,0 @@
-import uuid
-from pathlib import Path
-from unittest import mock
-
-from django.test import TestCase
-from django.test import override_settings
-
-from documents.tests.utils import DirectoriesMixin
-from documents.tests.utils import FileSystemAssertsMixin
-from paperless_remote.parsers import RemoteDocumentParser
-from paperless_remote.signals import get_parser
-
-
-class TestParser(DirectoriesMixin, FileSystemAssertsMixin, TestCase):
-    SAMPLE_FILES = Path(__file__).resolve().parent / "samples"
-
-    def assertContainsStrings(self, content: str, strings: list[str]):
-        # Asserts that all strings appear in content, in the given order.
-        indices = []
-        for s in strings:
-            if s in content:
-                indices.append(content.index(s))
-            else:
-                self.fail(f"'{s}' is not in '{content}'")
-        self.assertListEqual(indices, sorted(indices))
-
-    @mock.patch("paperless_tesseract.parsers.run_subprocess")
-    @mock.patch("azure.ai.documentintelligence.DocumentIntelligenceClient")
-    def test_get_text_with_azure(self, mock_client_cls, mock_subprocess):
-        # Arrange mock Azure client
-        mock_client = mock.Mock()
-        mock_client_cls.return_value = mock_client
-
-        # Simulate poller result and its `.details`
-        mock_poller = mock.Mock()
-        mock_poller.wait.return_value = None
-        mock_poller.details = {"operation_id": "fake-op-id"}
-        mock_client.begin_analyze_document.return_value = mock_poller
-        mock_poller.result.return_value.content = "This is a test document."
-
-        # Return dummy PDF bytes
-        mock_client.get_analyze_result_pdf.return_value = [
-            b"%PDF-",
-            b"1.7 ",
-            b"FAKEPDF",
-        ]
-
-        # Simulate pdftotext by writing dummy text to sidecar file
-        def fake_run(cmd, *args, **kwargs):
-            with Path(cmd[-1]).open("w", encoding="utf-8") as f:
-                f.write("This is a test document.")
-
-        mock_subprocess.side_effect = fake_run
-
-        with override_settings(
-            REMOTE_OCR_ENGINE="azureai",
-            REMOTE_OCR_API_KEY="somekey",
-            REMOTE_OCR_ENDPOINT="https://endpoint.cognitiveservices.azure.com",
-        ):
-            parser = get_parser(uuid.uuid4())
-            parser.parse(
-                self.SAMPLE_FILES / "simple-digital.pdf",
-                "application/pdf",
-            )
-
-        self.assertContainsStrings(
-            parser.text.strip(),
-            ["This is a test document."],
-        )
-
-    @override_settings(
-        REMOTE_OCR_ENGINE="azureai",
-        REMOTE_OCR_API_KEY="key",
-        REMOTE_OCR_ENDPOINT="https://endpoint.cognitiveservices.azure.com",
-    )
-    def test_supported_mime_types_valid_config(self):
-        parser = RemoteDocumentParser(uuid.uuid4())
-        expected_types = {
-            "application/pdf": ".pdf",
-            "image/png": ".png",
-            "image/jpeg": ".jpg",
-            "image/tiff": ".tiff",
-            "image/bmp": ".bmp",
-            "image/gif": ".gif",
-            "image/webp": ".webp",
-        }
-        self.assertEqual(parser.supported_mime_types(), expected_types)
-
-    def test_supported_mime_types_invalid_config(self):
-        parser = get_parser(uuid.uuid4())
-        self.assertEqual(parser.supported_mime_types(), {})
-
-    @override_settings(
-        REMOTE_OCR_ENGINE=None,
-        REMOTE_OCR_API_KEY=None,
-        REMOTE_OCR_ENDPOINT=None,
-    )
-    def test_parse_with_invalid_config(self):
-        parser = get_parser(uuid.uuid4())
-        parser.parse(self.SAMPLE_FILES / "simple-digital.pdf", "application/pdf")
-        self.assertEqual(parser.text, "")

uv.lock generated
View File

@@ -95,34 +95,6 @@ wheels = [
     { url = "https://files.pythonhosted.org/packages/af/cc/55a32a2c98022d88812b5986d2a92c4ff3ee087e83b712ebc703bba452bf/Automat-24.8.1-py3-none-any.whl", hash = "sha256:bf029a7bc3da1e2c24da2343e7598affaa9f10bf0ab63ff808566ce90551e02a", size = 42585, upload-time = "2024-08-19T17:31:56.729Z" },
 ]
 
-[[package]]
-name = "azure-ai-documentintelligence"
-version = "1.0.2"
-source = { registry = "https://pypi.org/simple" }
-dependencies = [
-    { name = "azure-core", marker = "sys_platform == 'darwin' or sys_platform == 'linux'" },
-    { name = "isodate", marker = "sys_platform == 'darwin' or sys_platform == 'linux'" },
-    { name = "typing-extensions", marker = "sys_platform == 'darwin' or sys_platform == 'linux'" },
-]
-sdist = { url = "https://files.pythonhosted.org/packages/44/7b/8115cd713e2caa5e44def85f2b7ebd02a74ae74d7113ba20bdd41fd6dd80/azure_ai_documentintelligence-1.0.2.tar.gz", hash = "sha256:4d75a2513f2839365ebabc0e0e1772f5601b3a8c9a71e75da12440da13b63484", size = 170940 }
-wheels = [
-    { url = "https://files.pythonhosted.org/packages/d9/75/c9ec040f23082f54ffb1977ff8f364c2d21c79a640a13d1c1809e7fd6b1a/azure_ai_documentintelligence-1.0.2-py3-none-any.whl", hash = "sha256:e1fb446abbdeccc9759d897898a0fe13141ed29f9ad11fc705f951925822ed59", size = 106005 },
-]
-
-[[package]]
-name = "azure-core"
-version = "1.33.0"
-source = { registry = "https://pypi.org/simple" }
-dependencies = [
-    { name = "requests", marker = "sys_platform == 'darwin' or sys_platform == 'linux'" },
-    { name = "six", marker = "sys_platform == 'darwin' or sys_platform == 'linux'" },
-    { name = "typing-extensions", marker = "sys_platform == 'darwin' or sys_platform == 'linux'" },
-]
-sdist = { url = "https://files.pythonhosted.org/packages/75/aa/7c9db8edd626f1a7d99d09ef7926f6f4fb34d5f9fa00dc394afdfe8e2a80/azure_core-1.33.0.tar.gz", hash = "sha256:f367aa07b5e3005fec2c1e184b882b0b039910733907d001c20fb08ebb8c0eb9", size = 295633 }
-wheels = [
-    { url = "https://files.pythonhosted.org/packages/07/b7/76b7e144aa53bd206bf1ce34fa75350472c3f69bf30e5c8c18bc9881035d/azure_core-1.33.0-py3-none-any.whl", hash = "sha256:9b5b6d0223a1d38c37500e6971118c1e0f13f54951e6893968b38910bc9cda8f", size = 207071 },
-]
-
 [[package]]
 name = "babel"
 version = "2.17.0"
@@ -1431,15 +1403,6 @@ wheels = [
     { url = "https://files.pythonhosted.org/packages/c7/fc/4e5a141c3f7c7bed550ac1f69e599e92b6be449dd4677ec09f325cad0955/inotifyrecursive-0.3.5-py3-none-any.whl", hash = "sha256:7e5f4a2e1dc2bef0efa3b5f6b339c41fb4599055a2b54909d020e9e932cc8d2f", size = 8009, upload-time = "2020-11-20T12:38:46.981Z" },
 ]
 
-[[package]]
-name = "isodate"
-version = "0.7.2"
-source = { registry = "https://pypi.org/simple" }
-sdist = { url = "https://files.pythonhosted.org/packages/54/4d/e940025e2ce31a8ce1202635910747e5a87cc3a6a6bb2d00973375014749/isodate-0.7.2.tar.gz", hash = "sha256:4cd1aa0f43ca76f4a6c6c0292a85f40b35ec2e43e315b59f06e6d32171a953e6", size = 29705 }
-wheels = [
-    { url = "https://files.pythonhosted.org/packages/15/aa/0aca39a37d3c7eb941ba736ede56d689e7be91cab5d9ca846bde3999eba6/isodate-0.7.2-py3-none-any.whl", hash = "sha256:28009937d8031054830160fce6d409ed342816b543597cece116d966c6d99e15", size = 22320 },
-]
-
 [[package]]
 name = "jinja2"
 version = "3.1.6"
@@ -2060,7 +2023,6 @@ name = "paperless-ngx"
 version = "2.18.4"
 source = { virtual = "." }
 dependencies = [
-    { name = "azure-ai-documentintelligence", marker = "sys_platform == 'darwin' or sys_platform == 'linux'" },
     { name = "babel", marker = "sys_platform == 'darwin' or sys_platform == 'linux'" },
     { name = "bleach", marker = "sys_platform == 'darwin' or sys_platform == 'linux'" },
     { name = "celery", extra = ["redis"], marker = "sys_platform == 'darwin' or sys_platform == 'linux'" },
@@ -2197,7 +2159,6 @@ typing = [
 [package.metadata]
 requires-dist = [
-    { name = "azure-ai-documentintelligence", specifier = ">=1.0.2" },
     { name = "babel", specifier = ">=2.17" },
     { name = "bleach", specifier = "~=6.2.0" },
     { name = "celery", extras = ["redis"], specifier = "~=5.5.1" },