Compare commits


10 Commits

| Author | SHA1 | Message | Date |
| --- | --- | --- | --- |
| shamoon | a759f974c4 | Merge branch 'dev' into feature-better-asn-operations | 2026-01-29 11:24:18 -08:00 |
| GitHub Actions | a367b8ad1c | Auto translate strings | 2026-01-29 16:07:32 +00:00 |
| Christoph Schober | d16d3fb618 | Feature: support split documents based on tag barcodes (#11645) | 2026-01-29 08:05:33 -08:00 |
| Trenton H | 5577f70c69 | Chore: Enables pylance pytest integration, swaps around some test markers (#11930) | 2026-01-28 23:06:11 +00:00 |
| Philipp Defner | 4d9aa2e943 | Expose port to host (#11918); Co-authored-by: shamoon <4887959+shamoon@users.noreply.github.com> | 2026-01-28 21:51:10 +00:00 |
| shamoon | 0a40c0de0c | Merge branch 'dev' into feature-better-asn-operations | 2026-01-27 17:03:40 -08:00 |
| shamoon | 5fe46cac55 | Mas testing | 2026-01-24 20:05:25 -08:00 |
| shamoon | c0c2202564 | skip_asn_if_exists | 2026-01-24 20:05:25 -08:00 |
| shamoon | d65d9a2b88 | "Handoff" ASN when merging or editing PDFs | 2026-01-24 20:05:25 -08:00 |
| shamoon | 8e12f3e93c | First, release ASNs before document replacement (and restore if needed) | 2026-01-24 20:05:25 -08:00 |
39 changed files with 1923 additions and 1283 deletions

View File

@@ -3,6 +3,7 @@
"dockerComposeFile": "docker-compose.devcontainer.sqlite-tika.yml", "dockerComposeFile": "docker-compose.devcontainer.sqlite-tika.yml",
"service": "paperless-development", "service": "paperless-development",
"workspaceFolder": "/usr/src/paperless/paperless-ngx", "workspaceFolder": "/usr/src/paperless/paperless-ngx",
"forwardPorts": [4200, 8000],
"containerEnv": { "containerEnv": {
"UV_CACHE_DIR": "/usr/src/paperless/paperless-ngx/.uv-cache" "UV_CACHE_DIR": "/usr/src/paperless/paperless-ngx/.uv-cache"
}, },

View File

@@ -33,7 +33,7 @@
"label": "Start: Frontend Angular", "label": "Start: Frontend Angular",
"description": "Start the Frontend Angular Dev Server", "description": "Start the Frontend Angular Dev Server",
"type": "shell", "type": "shell",
"command": "pnpm start", "command": "pnpm exec ng serve --host 0.0.0.0",
"isBackground": true, "isBackground": true,
"options": { "options": {
"cwd": "${workspaceFolder}/src-ui" "cwd": "${workspaceFolder}/src-ui"

View File

@@ -805,6 +805,27 @@ See the relevant settings [`PAPERLESS_CONSUMER_ENABLE_TAG_BARCODE`](configuratio
 and [`PAPERLESS_CONSUMER_TAG_BARCODE_MAPPING`](configuration.md#PAPERLESS_CONSUMER_TAG_BARCODE_MAPPING)
 for more information.
+
+#### Splitting on Tag Barcodes
+
+By default, tag barcodes only assign tags to documents without splitting them. However,
+you can enable document splitting on tag barcodes by setting
+[`PAPERLESS_CONSUMER_TAG_BARCODE_SPLIT`](configuration.md#PAPERLESS_CONSUMER_TAG_BARCODE_SPLIT)
+to `true`.
+
+When enabled, documents will be split at pages containing tag barcodes, similar to how
+ASN barcodes work. Key features:
+
+- The page with the tag barcode is **retained** in the resulting document
+- **Each split document extracts its own tags** - only tags on pages within that document are assigned
+- Multiple tag barcodes can trigger multiple splits in the same document
+- Works seamlessly with ASN barcodes - each split document gets its own ASN and tags
+
+This is useful for batch scanning where you place tag barcode pages between different
+documents to both separate and categorize them in a single operation.
+
+**Example:** A 6-page scan with TAG:invoice on page 3 and TAG:receipt on page 5 will create
+three documents: pages 1-2 (no tags), pages 3-4 (tagged "invoice"), and pages 5-6 (tagged "receipt").
 
 ## Automatic collation of double-sided documents {#collate}
 
 !!! note
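To make the split semantics above concrete, here is a small, self-contained Python sketch (illustrative only, not the actual `BarcodePlugin` implementation) that reproduces the 6-page example from the docs:

```python
# Illustrative sketch of the documented split behavior: a tag-barcode page
# starts a new document, is retained in it, and each resulting document only
# gets the tags found on its own pages.

def split_on_tag_pages(
    num_pages: int,
    tag_pages: dict[int, str],
) -> list[tuple[list[int], list[str]]]:
    """Return (pages, tags) groups for a 1-indexed document of num_pages."""
    docs: list[tuple[list[int], list[str]]] = []
    pages: list[int] = []
    tags: list[str] = []
    for page in range(1, num_pages + 1):
        if page in tag_pages and pages:
            docs.append((pages, tags))  # close the previous document
            pages, tags = [], []
        pages.append(page)  # the barcode page itself is retained
        if page in tag_pages:
            tags.append(tag_pages[page])
    docs.append((pages, tags))
    return docs

# The example from the docs: TAG:invoice on page 3, TAG:receipt on page 5
assert split_on_tag_pages(6, {3: "invoice", 5: "receipt"}) == [
    ([1, 2], []),
    ([3, 4], ["invoice"]),
    ([5, 6], ["receipt"]),
]
```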

View File

@@ -1557,6 +1557,20 @@ assigns or creates tags if a properly formatted barcode is detected.
 Please refer to the Python regex documentation for more information.
 
+#### [`PAPERLESS_CONSUMER_TAG_BARCODE_SPLIT=<bool>`](#PAPERLESS_CONSUMER_TAG_BARCODE_SPLIT) {#PAPERLESS_CONSUMER_TAG_BARCODE_SPLIT}
+
+:   Enables splitting of documents on tag barcodes, similar to how ASN barcodes work.
+
+    When enabled, documents will be split into separate PDFs at pages containing
+    tag barcodes that match the configured `PAPERLESS_CONSUMER_TAG_BARCODE_MAPPING`
+    patterns. The page with the tag barcode will be retained in the new document.
+    Each split document will have the detected tags assigned to it.
+
+    This only has an effect if `PAPERLESS_CONSUMER_ENABLE_TAG_BARCODE` is also enabled.
+
+    Defaults to false.
+
 ## Audit Trail
 
 #### [`PAPERLESS_AUDIT_LOG_ENABLED=<bool>`](#PAPERLESS_AUDIT_LOG_ENABLED) {#PAPERLESS_AUDIT_LOG_ENABLED}

View File

@@ -481,147 +481,3 @@ To get started:
 5. The project is ready for debugging, start either the fullstack debug or individual debug
    processes. To spin up the project without debugging, run the task **Project Start: Run all Services**
## Developing Date Parser Plugins
Paperless-ngx uses a plugin system for date parsing, allowing you to extend or replace the default date parsing behavior. Plugins are discovered using [Python entry points](https://setuptools.pypa.io/en/latest/userguide/entry_point.html).
### Creating a Date Parser Plugin
To create a custom date parser plugin, you need to:
1. Create a class that inherits from `DateParserPluginBase`
2. Implement the required abstract method
3. Register your plugin via an entry point
#### 1. Implementing the Parser Class
Your parser must extend `documents.plugins.date_parsing.DateParserPluginBase` and implement the `parse` method:
```python
from collections.abc import Iterator
import datetime
from documents.plugins.date_parsing import DateParserPluginBase
class MyDateParserPlugin(DateParserPluginBase):
"""
Custom date parser implementation.
"""
def parse(self, filename: str, content: str) -> Iterator[datetime.datetime]:
"""
Parse dates from the document's filename and content.
Args:
filename: The original filename of the document
content: The extracted text content of the document
Yields:
datetime.datetime: Valid datetime objects found in the document
"""
# Your parsing logic here
# Use self.config to access configuration settings
# Example: parse dates from filename first
if self.config.filename_date_order:
# Your filename parsing logic
yield some_datetime
# Then parse dates from content
# Your content parsing logic
yield another_datetime
```
#### 2. Configuration and Helper Methods
Your parser instance is initialized with a `DateParserConfig` object accessible via `self.config`. This provides:
- `languages: list[str]` - List of language codes for date parsing
- `timezone_str: str` - Timezone string for date localization
- `ignore_dates: set[datetime.date]` - Dates that should be filtered out
- `reference_time: datetime.datetime` - Current time for filtering future dates
- `filename_date_order: str | None` - Date order preference for filenames (e.g., "DMY", "MDY")
- `content_date_order: str` - Date order preference for content
The base class provides two helper methods you can use:
```python
def _parse_string(
self,
date_string: str,
date_order: str,
) -> datetime.datetime | None:
"""
Parse a single date string using dateparser with configured settings.
"""
def _filter_date(
self,
date: datetime.datetime | None,
) -> datetime.datetime | None:
"""
Validate a parsed datetime against configured rules.
Filters out dates before 1900, future dates, and ignored dates.
"""
```
#### 3. Resource Management (Optional)
If your plugin needs to acquire or release resources (database connections, API clients, etc.), override the context manager methods. Paperless-ngx will always use plugins as context managers, ensuring resources can be released even in the event of errors.
#### 4. Registering Your Plugin
Register your plugin using a setuptools entry point in your package's `pyproject.toml`:
```toml
[project.entry-points."paperless_ngx.date_parsers"]
my_parser = "my_package.parsers:MyDateParserPlugin"
```
The entry point name (e.g., `"my_parser"`) is used for sorting when multiple plugins are found. Paperless-ngx will use the first plugin alphabetically by name if multiple plugins are discovered.
### Plugin Discovery
Paperless-ngx automatically discovers and loads date parser plugins at runtime. The discovery process:
1. Queries the `paperless_ngx.date_parsers` entry point group
2. Validates that each plugin is a subclass of `DateParserPluginBase`
3. Sorts valid plugins alphabetically by entry point name
4. Uses the first valid plugin, or falls back to the default `RegexDateParserPlugin` if none are found
If multiple plugins are installed, a warning is logged indicating which plugin was selected.
### Example: Simple Date Parser
Here's a minimal example that only looks for ISO 8601 dates:
```python
import datetime
import re
from collections.abc import Iterator
from documents.plugins.date_parsing.base import DateParserPluginBase
class ISODateParserPlugin(DateParserPluginBase):
"""
Parser that only matches ISO 8601 formatted dates (YYYY-MM-DD).
"""
ISO_REGEX = re.compile(r"\b(\d{4}-\d{2}-\d{2})\b")
def parse(self, filename: str, content: str) -> Iterator[datetime.datetime]:
# Combine filename and content for searching
text = f"{filename} {content}"
for match in self.ISO_REGEX.finditer(text):
date_string = match.group(1)
# Use helper method to parse with configured timezone
date = self._parse_string(date_string, "YMD")
# Use helper method to validate the date
filtered_date = self._filter_date(date)
if filtered_date is not None:
yield filtered_date
```

View File

@@ -33,6 +33,8 @@
"**/coverage.json": true "**/coverage.json": true
}, },
"python.defaultInterpreterPath": ".venv/bin/python3", "python.defaultInterpreterPath": ".venv/bin/python3",
"python.analysis.inlayHints.pytestParameters": true,
"python.testing.pytestEnabled": true,
}, },
"extensions": { "extensions": {
"recommendations": ["ms-python.python", "charliermarsh.ruff", "editorconfig.editorconfig"], "recommendations": ["ms-python.python", "charliermarsh.ruff", "editorconfig.editorconfig"],

View File

@@ -66,6 +66,7 @@
 #PAPERLESS_CONSUMER_BARCODE_DPI=300
 #PAPERLESS_CONSUMER_ENABLE_TAG_BARCODE=false
 #PAPERLESS_CONSUMER_TAG_BARCODE_MAPPING={"TAG:(.*)": "\\g<1>"}
+#PAPERLESS_CONSUMER_TAG_BARCODE_SPLIT=false
 #PAPERLESS_CONSUMER_ENABLE_COLLATE_DOUBLE_SIDED=false
 #PAPERLESS_CONSUMER_COLLATE_DOUBLE_SIDED_SUBDIR_NAME=double-sided
 #PAPERLESS_CONSUMER_COLLATE_DOUBLE_SIDED_TIFF_SUPPORT=false

View File

@@ -306,7 +306,6 @@ markers = [
"gotenberg: Tests requiring Gotenberg service", "gotenberg: Tests requiring Gotenberg service",
"tika: Tests requiring Tika service", "tika: Tests requiring Tika service",
"greenmail: Tests requiring Greenmail service", "greenmail: Tests requiring Greenmail service",
"date_parsing: Tests which cover date parsing from content or filename",
] ]
[tool.pytest_env] [tool.pytest_env]
@@ -333,10 +332,6 @@ exclude_also = [
[tool.mypy] [tool.mypy]
mypy_path = "src" mypy_path = "src"
files = [
"src/documents/plugins/date_parsing",
"src/documents/tests/date_parsing",
]
plugins = [ plugins = [
"mypy_django_plugin.main", "mypy_django_plugin.main",
"mypy_drf_plugin.main", "mypy_drf_plugin.main",
@@ -348,28 +343,5 @@ disallow_untyped_defs = true
warn_redundant_casts = true warn_redundant_casts = true
warn_unused_ignores = true warn_unused_ignores = true
# This prevents errors from imports, but allows type-checking logic to work
follow_imports = "silent"
[[tool.mypy.overrides]]
module = [
"documents.*",
"paperless.*",
"paperless_ai.*",
"paperless_mail.*",
"paperless_tesseract.*",
"paperless_remote.*",
"paperless_text.*",
"paperless_tika.*",
]
ignore_errors = true
[[tool.mypy.overrides]]
module = [
"documents.plugins.date_parsing.*",
"documents.tests.date_parsing.*",
]
ignore_errors = false
[tool.django-stubs] [tool.django-stubs]
django_settings_module = "paperless.settings" django_settings_module = "paperless.settings"

View File

@@ -10412,60 +10412,67 @@
<context context-type="linenumber">269</context> <context context-type="linenumber">269</context>
</context-group> </context-group>
</trans-unit> </trans-unit>
<trans-unit id="8880243885140172279" datatype="html">
<source>Split on Tag Barcodes</source>
<context-group purpose="location">
<context context-type="sourcefile">src/app/data/paperless-config.ts</context>
<context context-type="linenumber">276</context>
</context-group>
</trans-unit>
<trans-unit id="7011909364081812031" datatype="html"> <trans-unit id="7011909364081812031" datatype="html">
<source>AI Enabled</source> <source>AI Enabled</source>
<context-group purpose="location"> <context-group purpose="location">
<context context-type="sourcefile">src/app/data/paperless-config.ts</context> <context context-type="sourcefile">src/app/data/paperless-config.ts</context>
<context context-type="linenumber">276</context> <context context-type="linenumber">283</context>
</context-group> </context-group>
</trans-unit> </trans-unit>
<trans-unit id="8028880048909383956" datatype="html"> <trans-unit id="8028880048909383956" datatype="html">
<source>Consider privacy implications when enabling AI features, especially if using a remote model.</source> <source>Consider privacy implications when enabling AI features, especially if using a remote model.</source>
<context-group purpose="location"> <context-group purpose="location">
<context context-type="sourcefile">src/app/data/paperless-config.ts</context> <context context-type="sourcefile">src/app/data/paperless-config.ts</context>
<context context-type="linenumber">280</context> <context context-type="linenumber">287</context>
</context-group> </context-group>
</trans-unit> </trans-unit>
<trans-unit id="8131374115579345652" datatype="html"> <trans-unit id="8131374115579345652" datatype="html">
<source>LLM Embedding Backend</source> <source>LLM Embedding Backend</source>
<context-group purpose="location"> <context-group purpose="location">
<context context-type="sourcefile">src/app/data/paperless-config.ts</context> <context context-type="sourcefile">src/app/data/paperless-config.ts</context>
<context context-type="linenumber">284</context> <context context-type="linenumber">291</context>
</context-group> </context-group>
</trans-unit> </trans-unit>
<trans-unit id="6647708571891295756" datatype="html"> <trans-unit id="6647708571891295756" datatype="html">
<source>LLM Embedding Model</source> <source>LLM Embedding Model</source>
<context-group purpose="location"> <context-group purpose="location">
<context context-type="sourcefile">src/app/data/paperless-config.ts</context> <context context-type="sourcefile">src/app/data/paperless-config.ts</context>
<context context-type="linenumber">292</context> <context context-type="linenumber">299</context>
</context-group> </context-group>
</trans-unit> </trans-unit>
<trans-unit id="4234495692726214397" datatype="html"> <trans-unit id="4234495692726214397" datatype="html">
<source>LLM Backend</source> <source>LLM Backend</source>
<context-group purpose="location"> <context-group purpose="location">
<context context-type="sourcefile">src/app/data/paperless-config.ts</context> <context context-type="sourcefile">src/app/data/paperless-config.ts</context>
<context context-type="linenumber">299</context> <context context-type="linenumber">306</context>
</context-group> </context-group>
</trans-unit> </trans-unit>
<trans-unit id="7935234833834000002" datatype="html"> <trans-unit id="7935234833834000002" datatype="html">
<source>LLM Model</source> <source>LLM Model</source>
<context-group purpose="location"> <context-group purpose="location">
<context context-type="sourcefile">src/app/data/paperless-config.ts</context> <context context-type="sourcefile">src/app/data/paperless-config.ts</context>
<context context-type="linenumber">307</context> <context context-type="linenumber">314</context>
</context-group> </context-group>
</trans-unit> </trans-unit>
<trans-unit id="1980550530387803165" datatype="html"> <trans-unit id="1980550530387803165" datatype="html">
<source>LLM API Key</source> <source>LLM API Key</source>
<context-group purpose="location"> <context-group purpose="location">
<context context-type="sourcefile">src/app/data/paperless-config.ts</context> <context context-type="sourcefile">src/app/data/paperless-config.ts</context>
<context context-type="linenumber">314</context> <context context-type="linenumber">321</context>
</context-group> </context-group>
</trans-unit> </trans-unit>
<trans-unit id="6126617860376156501" datatype="html"> <trans-unit id="6126617860376156501" datatype="html">
<source>LLM Endpoint</source> <source>LLM Endpoint</source>
<context-group purpose="location"> <context-group purpose="location">
<context context-type="sourcefile">src/app/data/paperless-config.ts</context> <context context-type="sourcefile">src/app/data/paperless-config.ts</context>
<context context-type="linenumber">321</context> <context context-type="linenumber">328</context>
</context-group> </context-group>
</trans-unit> </trans-unit>
<trans-unit id="4416413576346763682" datatype="html"> <trans-unit id="4416413576346763682" datatype="html">

View File

@@ -271,6 +271,13 @@ export const PaperlessConfigOptions: ConfigOption[] = [
     config_key: 'PAPERLESS_CONSUMER_TAG_BARCODE_MAPPING',
     category: ConfigCategory.Barcode,
   },
+  {
+    key: 'barcode_tag_split',
+    title: $localize`Split on Tag Barcodes`,
+    type: ConfigOptionType.Boolean,
+    config_key: 'PAPERLESS_CONSUMER_TAG_BARCODE_SPLIT',
+    category: ConfigCategory.Barcode,
+  },
   {
     key: 'ai_enabled',
     title: $localize`AI Enabled`,
@@ -352,6 +359,7 @@ export interface PaperlessConfig extends ObjectWithId {
   barcode_max_pages: number
   barcode_enable_tag: boolean
   barcode_tag_mapping: object
+  barcode_tag_split: boolean
   ai_enabled: boolean
   llm_embedding_backend: string
   llm_embedding_model: string

View File

@@ -16,6 +16,7 @@ from pikepdf import Pdf
 from documents.converters import convert_from_tiff_to_pdf
 from documents.data_models import ConsumableDocument
 from documents.data_models import DocumentMetadataOverrides
+from documents.models import Document
 from documents.models import Tag
 from documents.plugins.base import ConsumeTaskPlugin
 from documents.plugins.base import StopConsumeTaskError
@@ -60,6 +61,20 @@ class Barcode:
         """
         return self.value.startswith(self.settings.barcode_asn_prefix)
 
+    @property
+    def is_tag(self) -> bool:
+        """
+        Returns True if the barcode value matches any configured tag mapping pattern,
+        False otherwise.
+
+        Note: This does NOT exclude ASN or separator barcodes - they can also be used
+        as tags if they match a tag mapping pattern (e.g., {"ASN12.*": "JOHN"}).
+        """
+        for regex in self.settings.barcode_tag_mapping:
+            if re.match(regex, self.value, flags=re.IGNORECASE):
+                return True
+        return False
+
 
 class BarcodePlugin(ConsumeTaskPlugin):
     NAME: str = "BarcodePlugin"
@@ -115,6 +130,24 @@ class BarcodePlugin(ConsumeTaskPlugin):
         self._tiff_conversion_done = False
         self.barcodes: list[Barcode] = []
 
+    def _apply_detected_asn(self, detected_asn: int) -> None:
+        """
+        Apply a detected ASN to metadata if allowed.
+        """
+        if (
+            self.metadata.skip_asn_if_exists
+            and Document.global_objects.filter(
+                archive_serial_number=detected_asn,
+            ).exists()
+        ):
+            logger.info(
+                f"Found ASN in barcode {detected_asn} but skipping because it already exists.",
+            )
+            return
+
+        logger.info(f"Found ASN in barcode: {detected_asn}")
+        self.metadata.asn = detected_asn
+
     def run(self) -> None:
         # Some operations may use PIL, override pixel setting if needed
         maybe_override_pixel_limit()
@@ -126,8 +159,14 @@ class BarcodePlugin(ConsumeTaskPlugin):
         self.detect()
 
         # try reading tags from barcodes
+        # If tag splitting is enabled, skip this on the original document - let each split document extract its own tags
+        # However, if we're processing a split document (original_path is set), extract tags
         if (
             self.settings.barcode_enable_tag
+            and (
+                not self.settings.barcode_tag_split
+                or self.input_doc.original_path is not None
+            )
             and (tags := self.tags) is not None
             and len(tags) > 0
         ):
@@ -186,13 +225,8 @@
         # Update/overwrite an ASN if possible
         # After splitting, as otherwise each split document gets the same ASN
-        if (
-            self.settings.barcode_enable_asn
-            and not self.metadata.skip_asn
-            and (located_asn := self.asn) is not None
-        ):
-            logger.info(f"Found ASN in barcode: {located_asn}")
-            self.metadata.asn = located_asn
+        if self.settings.barcode_enable_asn and (located_asn := self.asn) is not None:
+            self._apply_detected_asn(located_asn)
 
     def cleanup(self) -> None:
         self.temp_dir.cleanup()
@@ -432,16 +466,25 @@
             for bc in self.barcodes
             if bc.is_separator and (not retain or (retain and bc.page > 0))
         }  # as below, dont include the first page if retain is enabled
 
-        if not self.settings.barcode_enable_asn:
-            return separator_pages
-
         # add the page numbers of the ASN barcodes
         # (except for first page, that might lead to infinite loops).
-        return {
-            **separator_pages,
-            **{bc.page: True for bc in self.barcodes if bc.is_asn and bc.page != 0},
-        }
+        if self.settings.barcode_enable_asn:
+            separator_pages = {
+                **separator_pages,
+                **{bc.page: True for bc in self.barcodes if bc.is_asn and bc.page != 0},
+            }
+
+        # add the page numbers of the TAG barcodes if splitting is enabled
+        # (except for first page, that might lead to infinite loops).
+        if self.settings.barcode_tag_split and self.settings.barcode_enable_tag:
+            separator_pages = {
+                **separator_pages,
+                **{bc.page: True for bc in self.barcodes if bc.is_tag and bc.page != 0},
+            }
+
+        return separator_pages
 
     def separate_pages(self, pages_to_split_on: dict[int, bool]) -> list[Path]:
         """
         Separate the provided pdf file on the pages_to_split_on.
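A condensed, dependency-free sketch of the merge logic in the hunk above. `FakeBarcode` is a stand-in with only the fields this illustration assumes, and the value semantics of the returned mapping (whether the barcode page is kept) are an assumption based on the comments in the diff:

```python
from dataclasses import dataclass

@dataclass
class FakeBarcode:  # stand-in for the real Barcode class
    page: int
    is_separator: bool = False
    is_asn: bool = False
    is_tag: bool = False

def pages_to_split_on(
    barcodes: list[FakeBarcode],
    *,
    enable_asn: bool,
    enable_tag: bool,
    tag_split: bool,
    retain: bool = False,
) -> dict[int, bool]:
    # Separator pages: assumed kept only when "retain" is enabled.
    pages = {
        bc.page: retain
        for bc in barcodes
        if bc.is_separator and (not retain or bc.page > 0)
    }
    # ASN pages also split, and the barcode page is kept (value True);
    # page 0 is skipped to avoid infinite re-splitting.
    if enable_asn:
        pages |= {bc.page: True for bc in barcodes if bc.is_asn and bc.page != 0}
    # With tag splitting enabled, tag pages behave like ASN pages.
    if tag_split and enable_tag:
        pages |= {bc.page: True for bc in barcodes if bc.is_tag and bc.page != 0}
    return pages
```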

View File

@@ -7,7 +7,6 @@ from pathlib import Path
 from typing import TYPE_CHECKING
 from typing import Literal
 
-from celery import chain
 from celery import chord
 from celery import group
 from celery import shared_task
@@ -38,6 +37,42 @@ if TYPE_CHECKING:
 logger: logging.Logger = logging.getLogger("paperless.bulk_edit")
 
 
+@shared_task(bind=True)
+def restore_archive_serial_numbers_task(
+    self,
+    backup: dict[int, int],
+    *args,
+    **kwargs,
+) -> None:
+    restore_archive_serial_numbers(backup)
+
+
+def release_archive_serial_numbers(doc_ids: list[int]) -> dict[int, int]:
+    """
+    Clears ASNs on documents that are about to be replaced so new documents
+    can be assigned ASNs without uniqueness collisions. Returns a backup map
+    of doc_id -> previous ASN for potential restoration.
+    """
+    qs = Document.objects.filter(
+        id__in=doc_ids,
+        archive_serial_number__isnull=False,
+    ).only("pk", "archive_serial_number")
+    backup = dict(qs.values_list("pk", "archive_serial_number"))
+    qs.update(archive_serial_number=None)
+    logger.info(f"Released archive serial numbers for documents {list(backup.keys())}")
+    return backup
+
+
+def restore_archive_serial_numbers(backup: dict[int, int]) -> None:
+    """
+    Restores ASNs using the provided backup map, intended for
+    rollback when replacement consumption fails.
+    """
+    for doc_id, asn in backup.items():
+        Document.objects.filter(pk=doc_id).update(archive_serial_number=asn)
+    logger.info(f"Restored archive serial numbers for documents {list(backup.keys())}")
+
+
 def set_correspondent(
     doc_ids: list[int],
     correspondent: Correspondent,
@@ -386,6 +421,7 @@ def merge(
     merged_pdf = pikepdf.new()
     version: str = merged_pdf.pdf_version
+    handoff_asn: int | None = None
 
     # use doc_ids to preserve order
     for doc_id in doc_ids:
         doc = qs.get(id=doc_id)
@@ -401,6 +437,8 @@ def merge(
             version = max(version, pdf.pdf_version)
             merged_pdf.pages.extend(pdf.pages)
             affected_docs.append(doc.id)
+            if handoff_asn is None and doc.archive_serial_number is not None:
+                handoff_asn = doc.archive_serial_number
         except Exception as e:
             logger.exception(
                 f"Error merging document {doc.id}, it will not be included in the merge: {e}",
@@ -426,6 +464,8 @@ def merge(
                 DocumentMetadataOverrides.from_document(metadata_document)
             )
             overrides.title = metadata_document.title + " (merged)"
+            if metadata_document.archive_serial_number is not None:
+                handoff_asn = metadata_document.archive_serial_number
         else:
             overrides = DocumentMetadataOverrides()
     else:
@@ -433,8 +473,11 @@ def merge(
     if user is not None:
         overrides.owner_id = user.id
 
-    # Avoid copying or detecting ASN from merged PDFs to prevent collision
-    overrides.skip_asn = True
+    if not delete_originals:
+        overrides.skip_asn_if_exists = True
+
+    if delete_originals and handoff_asn is not None:
+        overrides.asn = handoff_asn
 
     logger.info("Adding merged document to the task queue.")
@@ -447,10 +490,18 @@ def merge(
     )
 
     if delete_originals:
+        backup = release_archive_serial_numbers(affected_docs)
         logger.info(
             "Queueing removal of original documents after consumption of merged document",
         )
-        chain(consume_task, delete.si(affected_docs)).delay()
+        try:
+            consume_task.apply_async(
+                link=[delete.si(affected_docs)],
+                link_error=[restore_archive_serial_numbers_task.s(backup)],
+            )
+        except Exception:
+            restore_archive_serial_numbers(backup)
+            raise
     else:
         consume_task.delay()
@@ -494,6 +545,8 @@ def split(
         overrides.title = f"{doc.title} (split {idx + 1})"
         if user is not None:
             overrides.owner_id = user.id
+        if not delete_originals:
+            overrides.skip_asn_if_exists = True
         logger.info(
             f"Adding split document with pages {split_doc} to the task queue.",
         )
@@ -508,10 +561,20 @@ def split(
     )
 
     if delete_originals:
+        backup = release_archive_serial_numbers([doc.id])
         logger.info(
             "Queueing removal of original document after consumption of the split documents",
         )
-        chord(header=consume_tasks, body=delete.si([doc.id])).delay()
+        try:
+            chord(
+                header=consume_tasks,
+                body=delete.si([doc.id]),
+            ).apply_async(
+                link_error=[restore_archive_serial_numbers_task.s(backup)],
+            )
+        except Exception:
+            restore_archive_serial_numbers(backup)
+            raise
     else:
         group(consume_tasks).delay()
@@ -614,7 +677,10 @@ def edit_pdf(
     )
     if user is not None:
         overrides.owner_id = user.id
+    if not delete_original:
+        overrides.skip_asn_if_exists = True
+    if delete_original and len(pdf_docs) == 1:
+        overrides.asn = doc.archive_serial_number
 
     for idx, pdf in enumerate(pdf_docs, start=1):
         filepath: Path = (
             Path(tempfile.mkdtemp(dir=settings.SCRATCH_DIR))
@@ -633,7 +699,17 @@ def edit_pdf(
     )
 
     if delete_original:
-        chord(header=consume_tasks, body=delete.si([doc.id])).delay()
+        backup = release_archive_serial_numbers([doc.id])
+        try:
+            chord(
+                header=consume_tasks,
+                body=delete.si([doc.id]),
+            ).apply_async(
+                link_error=[restore_archive_serial_numbers_task.s(backup)],
+            )
+        except Exception:
+            restore_archive_serial_numbers(backup)
+            raise
     else:
         group(consume_tasks).delay()
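All three call sites above (merge, split, edit_pdf) wrap their Celery work in the same release-then-restore pattern. A minimal, Django-free sketch of that pattern, where a plain dict stands in for `Document` rows and the failing consume step is simulated:

```python
# Django-free sketch of the ASN release/restore rollback used above.
# `documents` maps doc_id -> ASN (or None); the RuntimeError stands in for a
# failed consumption of the replacement document (the link_error path).

def release_asns(documents: dict[int, int | None], doc_ids: list[int]) -> dict[int, int]:
    """Clear ASNs so replacements can claim them; return doc_id -> old ASN."""
    backup = {i: documents[i] for i in doc_ids if documents.get(i) is not None}
    for i in backup:
        documents[i] = None
    return backup

def restore_asns(documents: dict[int, int | None], backup: dict[int, int]) -> None:
    """Roll back to the pre-release state using the backup map."""
    for i, asn in backup.items():
        documents[i] = asn

documents = {1: 100, 2: None, 3: 101}
backup = release_asns(documents, [1, 2, 3])  # backup == {1: 100, 3: 101}
try:
    raise RuntimeError("consumption failed")  # simulated failure
except RuntimeError:
    restore_asns(documents, backup)           # mirrors link_error rollback
assert documents == {1: 100, 2: None, 3: 101}
```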

View File

@@ -32,12 +32,12 @@ from documents.models import WorkflowTrigger
 from documents.parsers import DocumentParser
 from documents.parsers import ParseError
 from documents.parsers import get_parser_class_for_mime_type
+from documents.parsers import parse_date
 from documents.permissions import set_permissions_for_object
 from documents.plugins.base import AlwaysRunPluginMixin
 from documents.plugins.base import ConsumeTaskPlugin
 from documents.plugins.base import NoCleanupPluginMixin
 from documents.plugins.base import NoSetupPluginMixin
-from documents.plugins.date_parsing import get_date_parser
 from documents.plugins.helpers import ProgressManager
 from documents.plugins.helpers import ProgressStatusOptions
 from documents.signals import document_consumption_finished
@@ -426,8 +426,7 @@
             ProgressStatusOptions.WORKING,
             ConsumerStatusShortMessage.PARSE_DATE,
         )
-        with get_date_parser() as date_parser:
-            date = next(date_parser.parse(self.filename, text), None)
+        date = parse_date(self.filename, text)
 
         archive_path = document_parser.get_archive_path()
         page_count = document_parser.get_page_count(self.working_copy, mime_type)
@@ -691,7 +690,7 @@
                 pk=self.metadata.storage_path_id,
             )
 
-        if self.metadata.asn is not None and not self.metadata.skip_asn:
+        if self.metadata.asn is not None:
             document.archive_serial_number = self.metadata.asn
 
         if self.metadata.owner_id:
@@ -833,8 +832,8 @@
         """
         Check that if override_asn is given, it is unique and within a valid range
         """
-        if self.metadata.skip_asn or self.metadata.asn is None:
-            # if skip is set or ASN is None
+        if self.metadata.asn is None:
+            # if ASN is None
             return
 
         # Validate the range is above zero and less than uint32_t max
         # otherwise, Whoosh can't handle it in the index

View File

@@ -30,7 +30,7 @@ class DocumentMetadataOverrides:
     change_users: list[int] | None = None
     change_groups: list[int] | None = None
     custom_fields: dict | None = None
-    skip_asn: bool = False
+    skip_asn_if_exists: bool = False
 
     def update(self, other: "DocumentMetadataOverrides") -> "DocumentMetadataOverrides":
         """
@@ -50,8 +50,8 @@
             self.storage_path_id = other.storage_path_id
         if other.owner_id is not None:
             self.owner_id = other.owner_id
-        if other.skip_asn:
-            self.skip_asn = True
+        if other.skip_asn_if_exists:
+            self.skip_asn_if_exists = True
 
         # merge
         if self.tag_ids is None:
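The rename keeps the one-way merge semantics of `update()`: once either side sets the flag, it stays set. A stripped-down model to illustrate (the `Overrides` class here is hypothetical, not the full `DocumentMetadataOverrides` dataclass):

```python
from dataclasses import dataclass

@dataclass
class Overrides:  # hypothetical stand-in with only the renamed field
    skip_asn_if_exists: bool = False

    def update(self, other: "Overrides") -> "Overrides":
        # One-way merge: the flag can be switched on, never back off.
        if other.skip_asn_if_exists:
            self.skip_asn_if_exists = True
        return self

base = Overrides()
assert base.update(Overrides(skip_asn_if_exists=True)).skip_asn_if_exists
assert base.update(Overrides()).skip_asn_if_exists  # stays True
```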

View File

@@ -9,17 +9,22 @@ import subprocess
 import tempfile
 from functools import lru_cache
 from pathlib import Path
+from re import Match
 from typing import TYPE_CHECKING
 
 from django.conf import settings
+from django.utils import timezone
 
 from documents.loggers import LoggingMixin
 from documents.signals import document_consumer_declaration
 from documents.utils import copy_file_with_basic_stats
 from documents.utils import run_subprocess
+from paperless.config import OcrConfig
+from paperless.utils import ocr_to_dateparser_languages
 
 if TYPE_CHECKING:
     import datetime
+    from collections.abc import Iterator
 
 # This regular expression will try to find dates in the document at
 # hand and will match the following formats:
@@ -254,6 +259,75 @@ def make_thumbnail_from_pdf(in_path: Path, temp_dir: Path, logging_group=None) -
     return out_path
 
 
+def parse_date(filename, text) -> datetime.datetime | None:
+    return next(parse_date_generator(filename, text), None)
+
+
+def parse_date_generator(filename, text) -> Iterator[datetime.datetime]:
+    """
+    Returns the date of the document.
+    """
+
+    def __parser(ds: str, date_order: str) -> datetime.datetime:
+        """
+        Call dateparser.parse with a particular date ordering
+        """
+        import dateparser
+
+        ocr_config = OcrConfig()
+        languages = settings.DATE_PARSER_LANGUAGES or ocr_to_dateparser_languages(
+            ocr_config.language,
+        )
+        return dateparser.parse(
+            ds,
+            settings={
+                "DATE_ORDER": date_order,
+                "PREFER_DAY_OF_MONTH": "first",
+                "RETURN_AS_TIMEZONE_AWARE": True,
+                "TIMEZONE": settings.TIME_ZONE,
+            },
+            locales=languages,
+        )
+
+    def __filter(date: datetime.datetime) -> datetime.datetime | None:
+        if (
+            date is not None
+            and date.year > 1900
+            and date <= timezone.now()
+            and date.date() not in settings.IGNORE_DATES
+        ):
+            return date
+        return None
+
+    def __process_match(
+        match: Match[str],
+        date_order: str,
+    ) -> datetime.datetime | None:
+        date_string = match.group(0)
+
+        try:
+            date = __parser(date_string, date_order)
+        except Exception:
+            # Skip all matches that do not parse to a proper date
+            date = None
+
+        return __filter(date)
+
+    def __process_content(content: str, date_order: str) -> Iterator[datetime.datetime]:
+        for m in re.finditer(DATE_REGEX, content):
+            date = __process_match(m, date_order)
+            if date is not None:
+                yield date
+
+    # if filename date parsing is enabled, search there first:
+    if settings.FILENAME_DATE_ORDER:
+        yield from __process_content(filename, settings.FILENAME_DATE_ORDER)
+
+    # Iterate through all regex matches in text and try to parse the date
+    yield from __process_content(text, settings.DATE_ORDER)
 
 
 class ParseError(Exception):
     pass
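A brief usage sketch for the restored helpers; it assumes a configured Django settings module, since `parse_date_generator` reads `settings.FILENAME_DATE_ORDER`, `settings.DATE_ORDER`, and related values at call time:

```python
# Assumes Django settings are configured; both helpers come from the hunk above.
from documents.parsers import parse_date, parse_date_generator

text = "Invoice dated 15.01.2024, due 29.01.2024"

# First acceptable date wins: the filename is searched first when
# FILENAME_DATE_ORDER is set, then the content.
date = parse_date("scan_2024-01-15.pdf", text)

# Or iterate over every candidate the date regex finds:
for candidate in parse_date_generator("scan.pdf", text):
    print(candidate)
```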

View File

@@ -1,100 +0,0 @@
import logging
from functools import lru_cache
from importlib.metadata import EntryPoint
from importlib.metadata import entry_points
from typing import Final
from django.conf import settings
from django.utils import timezone
from documents.plugins.date_parsing.base import DateParserConfig
from documents.plugins.date_parsing.base import DateParserPluginBase
from documents.plugins.date_parsing.regex_parser import RegexDateParserPlugin
from paperless.utils import ocr_to_dateparser_languages
logger = logging.getLogger(__name__)
DATE_PARSER_ENTRY_POINT_GROUP: Final = "paperless_ngx.date_parsers"
@lru_cache(maxsize=1)
def _discover_parser_class() -> type[DateParserPluginBase]:
"""
Discovers the date parser plugin class to use.
- If one or more plugins are found, sorts them by name and returns the first.
- If no plugins are found, returns the default RegexDateParser.
"""
eps: tuple[EntryPoint, ...]
try:
eps = entry_points(group=DATE_PARSER_ENTRY_POINT_GROUP)
except Exception as e:
# Log a warning
logger.warning(f"Could not query entry points for date parsers: {e}")
eps = ()
valid_plugins: list[EntryPoint] = []
for ep in eps:
try:
plugin_class = ep.load()
if plugin_class and issubclass(plugin_class, DateParserPluginBase):
valid_plugins.append(ep)
else:
logger.warning(f"Plugin {ep.name} does not subclass DateParser.")
except Exception as e:
logger.error(f"Unable to load date parser plugin {ep.name}: {e}")
if not valid_plugins:
return RegexDateParserPlugin
valid_plugins.sort(key=lambda ep: ep.name)
if len(valid_plugins) > 1:
logger.warning(
f"Multiple date parsers found: "
f"{[ep.name for ep in valid_plugins]}. "
f"Using the first one by name: '{valid_plugins[0].name}'.",
)
return valid_plugins[0].load()
def get_date_parser() -> DateParserPluginBase:
"""
Factory function to get an initialized date parser instance.
This function is responsible for:
1. Discovering the correct parser class (plugin or default).
2. Loading configuration from Django settings.
3. Instantiating the parser with the configuration.
"""
# 1. Discover the class (this is cached)
parser_class = _discover_parser_class()
# 2. Load configuration from settings
# TODO: Get the language from the settings and/or configuration object, depending
languages = languages = (
settings.DATE_PARSER_LANGUAGES
or ocr_to_dateparser_languages(settings.OCR_LANGUAGE)
)
config = DateParserConfig(
languages=languages,
timezone_str=settings.TIME_ZONE,
ignore_dates=settings.IGNORE_DATES,
reference_time=timezone.now(),
filename_date_order=settings.FILENAME_DATE_ORDER,
content_date_order=settings.DATE_ORDER,
)
# 3. Instantiate the discovered class with the config
return parser_class(config=config)
__all__ = [
"DateParserConfig",
"DateParserPluginBase",
"RegexDateParserPlugin",
"get_date_parser",
]

View File

@@ -1,124 +0,0 @@
import datetime
import logging
from abc import ABC
from abc import abstractmethod
from collections.abc import Iterator
from dataclasses import dataclass
from types import TracebackType
try:
from typing import Self
except ImportError:
from typing_extensions import Self
import dateparser
logger = logging.getLogger(__name__)
@dataclass(frozen=True, slots=True)
class DateParserConfig:
"""
Configuration for a DateParser instance.
This object is created by the factory and passed to the
parser's constructor, decoupling the parser from settings.
"""
languages: list[str]
timezone_str: str
ignore_dates: set[datetime.date]
# A "now" timestamp for filtering future dates.
# Passed in by the factory.
reference_time: datetime.datetime
# Settings for the default RegexDateParser
# Other plugins should use or consider these, but it is not required
filename_date_order: str | None
content_date_order: str
class DateParserPluginBase(ABC):
"""
Abstract base class for date parsing strategies.
Instances are configured via a DateParserConfig object.
"""
def __init__(self, config: DateParserConfig):
"""
Initializes the parser with its configuration.
"""
self.config = config
def __enter__(self) -> Self:
"""
Enter the runtime context related to this object.
Subclasses can override this to acquire resources (connections, handles).
"""
return self
def __exit__(
self,
exc_type: type[BaseException] | None,
exc_val: BaseException | None,
exc_tb: TracebackType | None,
) -> None:
"""
Exit the runtime context related to this object.
Subclasses can override this to release resources.
"""
# Default implementation does nothing.
# Returning None implies exceptions are propagated.
def _parse_string(
self,
date_string: str,
date_order: str,
) -> datetime.datetime | None:
"""
Helper method to parse a single date string using dateparser.
Uses configuration from `self.config`.
"""
try:
return dateparser.parse(
date_string,
settings={
"DATE_ORDER": date_order,
"PREFER_DAY_OF_MONTH": "first",
"RETURN_AS_TIMEZONE_AWARE": True,
"TIMEZONE": self.config.timezone_str,
},
locales=self.config.languages,
)
except Exception as e:
logger.error(f"Error while parsing date string '{date_string}': {e}")
return None
def _filter_date(
self,
date: datetime.datetime | None,
) -> datetime.datetime | None:
"""
Helper method to validate a parsed datetime object.
Uses configuration from `self.config`.
"""
if (
date is not None
and date.year > 1900
and date <= self.config.reference_time
and date.date() not in self.config.ignore_dates
):
return date
return None
@abstractmethod
def parse(self, filename: str, content: str) -> Iterator[datetime.datetime]:
"""
Parses a document's filename and content, yielding valid datetime objects.
"""

View File

@@ -1,65 +0,0 @@
import datetime
import re
from collections.abc import Iterator
from re import Match
from documents.plugins.date_parsing.base import DateParserPluginBase
class RegexDateParserPlugin(DateParserPluginBase):
"""
The default date parser, using a series of regular expressions.
It is configured entirely by the DateParserConfig object
passed to its constructor.
"""
DATE_REGEX = re.compile(
r"(\b|(?!=([_-])))(\d{1,2})[\.\/-](\d{1,2})[\.\/-](\d{4}|\d{2})(\b|(?=([_-])))|"
r"(\b|(?!=([_-])))(\d{4}|\d{2})[\.\/-](\d{1,2})[\.\/-](\d{1,2})(\b|(?=([_-])))|"
r"(\b|(?!=([_-])))(\d{1,2}[\. ]+[a-zéûäëčžúřěáíóńźçŞğü]{3,9} \d{4}|[a-zéûäëčžúřěáíóńźçŞğü]{3,9} \d{1,2}, \d{4})(\b|(?=([_-])))|"
r"(\b|(?!=([_-])))([^\W\d_]{3,9} \d{1,2}, (\d{4}))(\b|(?=([_-])))|"
r"(\b|(?!=([_-])))([^\W\d_]{3,9} \d{4})(\b|(?=([_-])))|"
r"(\b|(?!=([_-])))(\d{1,2}[^ 0-9]{2}[\. ]+[^ ]{3,9}[ \.\/-]\d{4})(\b|(?=([_-])))|"
r"(\b|(?!=([_-])))(\b\d{1,2}[ \.\/-][a-zéûäëčžúřěáíóńźçŞğü]{3}[ \.\/-]\d{4})(\b|(?=([_-])))",
re.IGNORECASE,
)
def _process_match(
self,
match: Match[str],
date_order: str,
) -> datetime.datetime | None:
"""
Processes a single regex match using the base class helpers.
"""
date_string = match.group(0)
date = self._parse_string(date_string, date_order)
return self._filter_date(date)
def _process_content(
self,
content: str,
date_order: str,
) -> Iterator[datetime.datetime]:
"""
Finds all regex matches in content and yields valid dates.
"""
for m in re.finditer(self.DATE_REGEX, content):
date = self._process_match(m, date_order)
if date is not None:
yield date
def parse(self, filename: str, content: str) -> Iterator[datetime.datetime]:
"""
Implementation of the abstract parse method.
Reads its configuration from `self.config`.
"""
if self.config.filename_date_order:
yield from self._process_content(
filename,
self.config.filename_date_order,
)
yield from self._process_content(content, self.config.content_date_order)

View File

@@ -1,82 +0,0 @@
import datetime
from collections.abc import Generator
from typing import Any
import pytest
import pytest_django
from documents.plugins.date_parsing import _discover_parser_class
from documents.plugins.date_parsing.base import DateParserConfig
from documents.plugins.date_parsing.regex_parser import RegexDateParserPlugin
@pytest.fixture
def base_config() -> DateParserConfig:
"""Basic configuration for date parser testing."""
return DateParserConfig(
languages=["en"],
timezone_str="UTC",
ignore_dates=set(),
reference_time=datetime.datetime(
2024,
1,
15,
12,
0,
0,
tzinfo=datetime.timezone.utc,
),
filename_date_order="YMD",
content_date_order="DMY",
)
@pytest.fixture
def config_with_ignore_dates() -> DateParserConfig:
"""Configuration with dates to ignore."""
return DateParserConfig(
languages=["en", "de"],
timezone_str="America/New_York",
ignore_dates={datetime.date(2024, 1, 1), datetime.date(2024, 12, 25)},
reference_time=datetime.datetime(
2024,
1,
15,
12,
0,
0,
tzinfo=datetime.timezone.utc,
),
filename_date_order="DMY",
content_date_order="MDY",
)
@pytest.fixture
def regex_parser(base_config: DateParserConfig) -> RegexDateParserPlugin:
"""Instance of RegexDateParser with base config."""
return RegexDateParserPlugin(base_config)
@pytest.fixture
def clear_lru_cache() -> Generator[None, None, None]:
"""
Ensure the LRU cache for _discover_parser_class is cleared
before and after any test that depends on it.
"""
_discover_parser_class.cache_clear()
yield
_discover_parser_class.cache_clear()
@pytest.fixture
def mock_date_parser_settings(settings: pytest_django.fixtures.SettingsWrapper) -> Any:
"""
Override Django settings for the duration of date parser tests.
"""
settings.DATE_PARSER_LANGUAGES = ["en", "de"]
settings.TIME_ZONE = "UTC"
settings.IGNORE_DATES = [datetime.date(1900, 1, 1)]
settings.FILENAME_DATE_ORDER = "YMD"
settings.DATE_ORDER = "DMY"
return settings

View File

@@ -1,228 +0,0 @@
import datetime
import logging
from collections.abc import Iterator
from importlib.metadata import EntryPoint
import pytest
import pytest_mock
from django.utils import timezone
from documents.plugins.date_parsing import DATE_PARSER_ENTRY_POINT_GROUP
from documents.plugins.date_parsing import _discover_parser_class
from documents.plugins.date_parsing import get_date_parser
from documents.plugins.date_parsing.base import DateParserConfig
from documents.plugins.date_parsing.base import DateParserPluginBase
from documents.plugins.date_parsing.regex_parser import RegexDateParserPlugin
class AlphaParser(DateParserPluginBase):
def parse(self, filename: str, content: str) -> Iterator[datetime.datetime]:
yield timezone.now()
class BetaParser(DateParserPluginBase):
def parse(self, filename: str, content: str) -> Iterator[datetime.datetime]:
yield timezone.now()
@pytest.mark.date_parsing
@pytest.mark.usefixtures("clear_lru_cache")
class TestDiscoverParserClass:
"""Tests for the _discover_parser_class() function."""
def test_returns_default_when_no_plugins_found(
self,
mocker: pytest_mock.MockerFixture,
) -> None:
mocker.patch(
"documents.plugins.date_parsing.entry_points",
return_value=(),
)
result = _discover_parser_class()
assert result is RegexDateParserPlugin
def test_returns_default_when_entrypoint_query_fails(
self,
mocker: pytest_mock.MockerFixture,
caplog: pytest.LogCaptureFixture,
) -> None:
mocker.patch(
"documents.plugins.date_parsing.entry_points",
side_effect=RuntimeError("boom"),
)
result = _discover_parser_class()
assert result is RegexDateParserPlugin
assert "Could not query entry points" in caplog.text
def test_filters_out_invalid_plugins(
self,
mocker: pytest_mock.MockerFixture,
caplog: pytest.LogCaptureFixture,
) -> None:
fake_ep = mocker.MagicMock(spec=EntryPoint)
fake_ep.name = "bad_plugin"
fake_ep.load.return_value = object # not subclass of DateParser
mocker.patch(
"documents.plugins.date_parsing.entry_points",
return_value=(fake_ep,),
)
result = _discover_parser_class()
assert result is RegexDateParserPlugin
assert "does not subclass DateParser" in caplog.text
def test_skips_plugins_that_fail_to_load(
self,
mocker: pytest_mock.MockerFixture,
caplog: pytest.LogCaptureFixture,
) -> None:
fake_ep = mocker.MagicMock(spec=EntryPoint)
fake_ep.name = "failing_plugin"
fake_ep.load.side_effect = ImportError("cannot import")
mocker.patch(
"documents.plugins.date_parsing.entry_points",
return_value=(fake_ep,),
)
result = _discover_parser_class()
assert result is RegexDateParserPlugin
assert "Unable to load date parser plugin failing_plugin" in caplog.text
def test_returns_single_valid_plugin_without_warning(
self,
mocker: pytest_mock.MockerFixture,
caplog: pytest.LogCaptureFixture,
) -> None:
"""If exactly one valid plugin is discovered, it should be returned without logging a warning."""
ep = mocker.MagicMock(spec=EntryPoint)
ep.name = "alpha"
ep.load.return_value = AlphaParser
mock_entry_points = mocker.patch(
"documents.plugins.date_parsing.entry_points",
return_value=(ep,),
)
with caplog.at_level(
logging.WARNING,
logger="documents.plugins.date_parsing",
):
result = _discover_parser_class()
# It should have called entry_points with the correct group
mock_entry_points.assert_called_once_with(group=DATE_PARSER_ENTRY_POINT_GROUP)
# The discovered class should be exactly our AlphaParser
assert result is AlphaParser
# No warnings should have been logged
assert not any(
"Multiple date parsers found" in record.message for record in caplog.records
), "Unexpected warning logged when only one plugin was found"
def test_returns_first_valid_plugin_by_name(
self,
mocker: pytest_mock.MockerFixture,
) -> None:
ep_a = mocker.MagicMock(spec=EntryPoint)
ep_a.name = "alpha"
ep_a.load.return_value = AlphaParser
ep_b = mocker.MagicMock(spec=EntryPoint)
ep_b.name = "beta"
ep_b.load.return_value = BetaParser
mocker.patch(
"documents.plugins.date_parsing.entry_points",
return_value=(ep_b, ep_a),
)
result = _discover_parser_class()
assert result is AlphaParser
def test_logs_warning_if_multiple_plugins_found(
self,
mocker: pytest_mock.MockerFixture,
caplog: pytest.LogCaptureFixture,
) -> None:
ep1 = mocker.MagicMock(spec=EntryPoint)
ep1.name = "a"
ep1.load.return_value = AlphaParser
ep2 = mocker.MagicMock(spec=EntryPoint)
ep2.name = "b"
ep2.load.return_value = BetaParser
mocker.patch(
"documents.plugins.date_parsing.entry_points",
return_value=(ep1, ep2),
)
with caplog.at_level(
logging.WARNING,
logger="documents.plugins.date_parsing",
):
result = _discover_parser_class()
# Should select alphabetically first plugin ("a")
assert result is AlphaParser
# Should log a warning mentioning multiple parsers
assert any(
"Multiple date parsers found" in record.message for record in caplog.records
), "Expected a warning about multiple date parsers"
def test_cache_behavior_only_runs_once(
self,
mocker: pytest_mock.MockerFixture,
) -> None:
mock_entry_points = mocker.patch(
"documents.plugins.date_parsing.entry_points",
return_value=(),
)
# First call populates cache
_discover_parser_class()
# Second call should not re-invoke entry_points
_discover_parser_class()
mock_entry_points.assert_called_once()
@pytest.mark.date_parsing
@pytest.mark.usefixtures("mock_date_parser_settings")
class TestGetDateParser:
"""Tests for the get_date_parser() factory function."""
def test_returns_instance_of_discovered_class(
self,
mocker: pytest_mock.MockerFixture,
) -> None:
mocker.patch(
"documents.plugins.date_parsing._discover_parser_class",
return_value=AlphaParser,
)
parser = get_date_parser()
assert isinstance(parser, AlphaParser)
assert isinstance(parser.config, DateParserConfig)
assert parser.config.languages == ["en", "de"]
assert parser.config.timezone_str == "UTC"
assert parser.config.ignore_dates == [datetime.date(1900, 1, 1)]
assert parser.config.filename_date_order == "YMD"
assert parser.config.content_date_order == "DMY"
# Check reference_time near now
delta = abs((parser.config.reference_time - timezone.now()).total_seconds())
assert delta < 2
def test_uses_default_regex_parser_when_no_plugins(
self,
mocker: pytest_mock.MockerFixture,
) -> None:
mocker.patch(
"documents.plugins.date_parsing._discover_parser_class",
return_value=RegexDateParserPlugin,
)
parser = get_date_parser()
assert isinstance(parser, RegexDateParserPlugin)

View File

@@ -1,433 +0,0 @@
import datetime
import logging
from typing import Any
import pytest
import pytest_mock
from documents.plugins.date_parsing.base import DateParserConfig
from documents.plugins.date_parsing.regex_parser import RegexDateParserPlugin
@pytest.mark.date_parsing
class TestParseString:
"""Tests for DateParser._parse_string method via RegexDateParser."""
@pytest.mark.parametrize(
("date_string", "date_order", "expected_year"),
[
pytest.param("15/01/2024", "DMY", 2024, id="dmy_slash"),
pytest.param("01/15/2024", "MDY", 2024, id="mdy_slash"),
pytest.param("2024/01/15", "YMD", 2024, id="ymd_slash"),
pytest.param("January 15, 2024", "DMY", 2024, id="month_name_comma"),
pytest.param("15 Jan 2024", "DMY", 2024, id="day_abbr_month_year"),
pytest.param("15.01.2024", "DMY", 2024, id="dmy_dot"),
pytest.param("2024-01-15", "YMD", 2024, id="ymd_dash"),
],
)
def test_parse_string_valid_formats(
self,
regex_parser: RegexDateParserPlugin,
date_string: str,
date_order: str,
expected_year: int,
) -> None:
"""Should correctly parse various valid date formats."""
result = regex_parser._parse_string(date_string, date_order)
assert result is not None
assert result.year == expected_year
@pytest.mark.parametrize(
"invalid_string",
[
pytest.param("not a date", id="plain_text"),
pytest.param("32/13/2024", id="invalid_day_month"),
pytest.param("", id="empty_string"),
pytest.param("abc123xyz", id="alphanumeric_gibberish"),
pytest.param("99/99/9999", id="out_of_range"),
],
)
def test_parse_string_invalid_input(
self,
regex_parser: RegexDateParserPlugin,
invalid_string: str,
) -> None:
"""Should return None for invalid date strings."""
result = regex_parser._parse_string(invalid_string, "DMY")
assert result is None
def test_parse_string_handles_exceptions(
self,
caplog: pytest.LogCaptureFixture,
mocker: pytest_mock.MockerFixture,
regex_parser: RegexDateParserPlugin,
) -> None:
"""Should handle and log exceptions from dateparser gracefully."""
with caplog.at_level(
logging.ERROR,
logger="documents.plugins.date_parsing.base",
):
# We still need to mock dateparser.parse to force the exception
mocker.patch(
"documents.plugins.date_parsing.base.dateparser.parse",
side_effect=ValueError(
"Parsing error: 01/01/2024",
),
)
# 1. Execute the function under test
result = regex_parser._parse_string("01/01/2024", "DMY")
assert result is None
# Check if an error was logged
assert len(caplog.records) == 1
assert caplog.records[0].levelname == "ERROR"
# Check if the specific error message is present
assert "Error while parsing date string" in caplog.text
# Optional: Check for the exact exception message if it's included in the log
assert "Parsing error: 01/01/2024" in caplog.text
@pytest.mark.date_parsing
class TestFilterDate:
"""Tests for DateParser._filter_date method via RegexDateParser."""
@pytest.mark.parametrize(
("date", "expected_output"),
[
# Valid Dates
pytest.param(
datetime.datetime(2024, 1, 10, tzinfo=datetime.timezone.utc),
datetime.datetime(2024, 1, 10, tzinfo=datetime.timezone.utc),
id="valid_past_date",
),
pytest.param(
datetime.datetime(2024, 1, 15, 12, 0, 0, tzinfo=datetime.timezone.utc),
datetime.datetime(2024, 1, 15, 12, 0, 0, tzinfo=datetime.timezone.utc),
id="exactly_at_reference",
),
pytest.param(
datetime.datetime(1901, 1, 1, tzinfo=datetime.timezone.utc),
datetime.datetime(1901, 1, 1, tzinfo=datetime.timezone.utc),
id="year_1901_valid",
),
# Date is > reference_time
pytest.param(
datetime.datetime(2024, 1, 16, tzinfo=datetime.timezone.utc),
None,
id="future_date_day_after",
),
# date.date() in ignore_dates
pytest.param(
datetime.datetime(2024, 1, 1, 0, 0, 0, tzinfo=datetime.timezone.utc),
None,
id="ignored_date_midnight_jan1",
),
pytest.param(
datetime.datetime(2024, 1, 1, 10, 30, 0, tzinfo=datetime.timezone.utc),
None,
id="ignored_date_midday_jan1",
),
pytest.param(
datetime.datetime(2024, 12, 25, 15, 0, 0, tzinfo=datetime.timezone.utc),
None,
id="ignored_date_dec25_future",
),
# date.year <= 1900
pytest.param(
datetime.datetime(1899, 12, 31, tzinfo=datetime.timezone.utc),
None,
id="year_1899",
),
pytest.param(
datetime.datetime(1900, 1, 1, tzinfo=datetime.timezone.utc),
None,
id="year_1900_boundary",
),
# date is None
pytest.param(None, None, id="none_input"),
],
)
def test_filter_date_validation_rules(
self,
config_with_ignore_dates: DateParserConfig,
date: datetime.datetime | None,
expected_output: datetime.datetime | None,
) -> None:
"""Should correctly validate dates against various rules."""
parser = RegexDateParserPlugin(config_with_ignore_dates)
result = parser._filter_date(date)
assert result == expected_output
def test_filter_date_respects_ignore_dates(
self,
config_with_ignore_dates: DateParserConfig,
) -> None:
"""Should filter out dates in the ignore_dates set."""
parser = RegexDateParserPlugin(config_with_ignore_dates)
ignored_date = datetime.datetime(
2024,
1,
1,
12,
0,
tzinfo=datetime.timezone.utc,
)
another_ignored = datetime.datetime(
2024,
12,
25,
15,
30,
tzinfo=datetime.timezone.utc,
)
allowed_date = datetime.datetime(
2024,
1,
2,
12,
0,
tzinfo=datetime.timezone.utc,
)
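# Matching is by calendar date (.date()), so any time of day on an
# ignored date is filtered out.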
assert parser._filter_date(ignored_date) is None
assert parser._filter_date(another_ignored) is None
assert parser._filter_date(allowed_date) == allowed_date
def test_filter_date_timezone_aware(
self,
regex_parser: RegexDateParserPlugin,
) -> None:
"""Should work with timezone-aware datetimes."""
date_utc = datetime.datetime(2024, 1, 10, 12, 0, tzinfo=datetime.timezone.utc)
result = regex_parser._filter_date(date_utc)
assert result is not None
assert result.tzinfo is not None
@pytest.mark.date_parsing
class TestRegexDateParser:
@pytest.mark.parametrize(
("filename", "content", "expected"),
[
pytest.param(
"report-2023-12-25.txt",
"Event recorded on 25/12/2022.",
[
datetime.datetime(2023, 12, 25, tzinfo=datetime.timezone.utc),
datetime.datetime(2022, 12, 25, tzinfo=datetime.timezone.utc),
],
id="filename-y-m-d_and_content-d-m-y",
),
pytest.param(
"img_2023.01.02.jpg",
"Taken on 01/02/2023",
[
datetime.datetime(2023, 1, 2, tzinfo=datetime.timezone.utc),
datetime.datetime(2023, 2, 1, tzinfo=datetime.timezone.utc),
],
id="ambiguous-dates-respect-orders",
),
pytest.param(
"notes.txt",
"bad date 99/99/9999 and 25/12/2022",
[
datetime.datetime(2022, 12, 25, tzinfo=datetime.timezone.utc),
],
id="parse-exception-skips-bad-and-yields-good",
),
],
)
def test_parse_returns_expected_dates(
self,
base_config: DateParserConfig,
mocker: pytest_mock.MockerFixture,
filename: str,
content: str,
expected: list[datetime.datetime],
) -> None:
"""
High-level tests that exercise RegexDateParser.parse only.
dateparser.parse is mocked so tests are deterministic.
"""
parser = RegexDateParserPlugin(base_config)
# Patch the dateparser.parse
target = "documents.plugins.date_parsing.base.dateparser.parse"
def fake_parse(
date_string: str,
settings: dict[str, Any] | None = None,
locales: list[str] | None = None,
) -> datetime.datetime | None:
date_order = settings.get("DATE_ORDER") if settings else None
# Filename-style YYYY-MM-DD / YYYY.MM.DD
if "2023-12-25" in date_string or "2023.12.25" in date_string:
return datetime.datetime(2023, 12, 25, tzinfo=datetime.timezone.utc)
# content DMY 25/12/2022
if "25/12/2022" in date_string or "25-12-2022" in date_string:
return datetime.datetime(2022, 12, 25, tzinfo=datetime.timezone.utc)
# filename YMD 2023.01.02
if "2023.01.02" in date_string or "2023-01-02" in date_string:
return datetime.datetime(2023, 1, 2, tzinfo=datetime.timezone.utc)
# ambiguous 01/02/2023 -> respect DATE_ORDER setting
if "01/02/2023" in date_string:
if date_order == "DMY":
return datetime.datetime(2023, 2, 1, tzinfo=datetime.timezone.utc)
if date_order == "YMD":
return datetime.datetime(2023, 1, 2, tzinfo=datetime.timezone.utc)
# fallback
return datetime.datetime(2023, 2, 1, tzinfo=datetime.timezone.utc)
# simulate parse failure for malformed input
if "99/99/9999" in date_string or "bad date" in date_string:
raise Exception("parse failed for malformed date")
return None
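# The "99/99/9999" branch above simulates a dateparser failure; the parser
# under test is expected to swallow it and still yield the valid dates.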
mocker.patch(target, side_effect=fake_parse)
results = list(parser.parse(filename, content))
assert results == expected
for dt in results:
assert dt.tzinfo is not None
def test_parse_filters_future_and_ignored_dates(
self,
mocker: pytest_mock.MockerFixture,
) -> None:
"""
Ensure parser filters out:
- dates after reference_time
- dates whose .date() falls in ignore_dates
"""
cfg = DateParserConfig(
languages=["en"],
timezone_str="UTC",
ignore_dates={datetime.date(2023, 12, 10)},
reference_time=datetime.datetime(
2024,
1,
15,
12,
0,
0,
tzinfo=datetime.timezone.utc,
),
filename_date_order="YMD",
content_date_order="DMY",
)
parser = RegexDateParserPlugin(cfg)
target = "documents.plugins.date_parsing.base.dateparser.parse"
def fake_parse(
date_string: str,
settings: dict[str, Any] | None = None,
locales: list[str] | None = None,
) -> datetime.datetime | None:
if "10/12/2023" in date_string or "10-12-2023" in date_string:
# ignored date
return datetime.datetime(2023, 12, 10, tzinfo=datetime.timezone.utc)
if "01/02/2024" in date_string or "01-02-2024" in date_string:
# future relative to reference_time -> filtered
return datetime.datetime(2024, 2, 1, tzinfo=datetime.timezone.utc)
if "05/01/2023" in date_string or "05-01-2023" in date_string:
# valid
return datetime.datetime(2023, 1, 5, tzinfo=datetime.timezone.utc)
return None
mocker.patch(target, side_effect=fake_parse)
content = "Ignored: 10/12/2023, Future: 01/02/2024, Keep: 05/01/2023"
results = list(parser.parse("whatever.txt", content))
assert results == [datetime.datetime(2023, 1, 5, tzinfo=datetime.timezone.utc)]
def test_parse_handles_no_matches_and_returns_empty_list(
self,
base_config: DateParserConfig,
) -> None:
"""
When there are no matching date-like substrings, parse should yield nothing.
"""
parser = RegexDateParserPlugin(base_config)
results = list(
parser.parse("no-dates.txt", "this has no dates whatsoever"),
)
assert results == []
def test_parse_skips_filename_when_filename_date_order_none(
self,
mocker: pytest_mock.MockerFixture,
) -> None:
"""
When filename_date_order is None, the parser must not attempt to parse the filename.
Only dates found in the content should be passed to dateparser.parse.
"""
cfg = DateParserConfig(
languages=["en"],
timezone_str="UTC",
ignore_dates=set(),
reference_time=datetime.datetime(
2024,
1,
15,
12,
0,
0,
tzinfo=datetime.timezone.utc,
),
filename_date_order=None,
content_date_order="DMY",
)
parser = RegexDateParserPlugin(cfg)
# Patch the module's dateparser.parse so we can inspect calls
target = "documents.plugins.date_parsing.base.dateparser.parse"
def fake_parse(
date_string: str,
settings: dict[str, Any] | None = None,
locales: list[str] | None = None,
) -> datetime.datetime | None:
# return distinct datetimes so we can tell which source was parsed
if "25/12/2022" in date_string:
return datetime.datetime(2022, 12, 25, tzinfo=datetime.timezone.utc)
if "2023-12-25" in date_string:
return datetime.datetime(2023, 12, 25, tzinfo=datetime.timezone.utc)
return None
mock = mocker.patch(target, side_effect=fake_parse)
filename = "report-2023-12-25.txt"
content = "Event recorded on 25/12/2022."
results = list(parser.parse(filename, content))
# Only the content date should have been parsed -> one call
assert mock.call_count == 1
# First call, first positional arg
called_date_string = mock.call_args_list[0][0][0]
assert "25/12/2022" in called_date_string
# And the parser should have yielded the corresponding datetime
assert results == [
datetime.datetime(2022, 12, 25, tzinfo=datetime.timezone.utc),
]

View File

@@ -0,0 +1,191 @@
[Binary content omitted: ReportLab-generated 6-page PDF test fixture; barcode sample images stored as ASCII85/Flate-encoded object streams]

View File

@@ -0,0 +1,251 @@
[Binary content omitted: ReportLab-generated 8-page PDF test fixture; barcode sample images stored as ASCII85/Flate-encoded object streams]

View File

@@ -0,0 +1,181 @@
[Binary content omitted: ReportLab-generated 5-page PDF test fixture; barcode sample images stored as ASCII85/Flate-encoded object streams]

View File

@@ -67,6 +67,7 @@ class TestApiAppConfig(DirectoriesMixin, APITestCase):
"barcode_max_pages": None, "barcode_max_pages": None,
"barcode_enable_tag": None, "barcode_enable_tag": None,
"barcode_tag_mapping": None, "barcode_tag_mapping": None,
"barcode_tag_split": None,
"ai_enabled": False, "ai_enabled": False,
"llm_embedding_backend": None, "llm_embedding_backend": None,
"llm_embedding_model": None, "llm_embedding_model": None,

View File

@@ -1978,11 +1978,11 @@ class TestDocumentApi(DirectoriesMixin, DocumentConsumeDelayMixin, APITestCase):
response = self.client.get(f"/api/documents/{doc.pk}/suggestions/") response = self.client.get(f"/api/documents/{doc.pk}/suggestions/")
self.assertEqual(response.status_code, status.HTTP_200_OK) self.assertEqual(response.status_code, status.HTTP_200_OK)
@mock.patch("documents.views.get_date_parser") @mock.patch("documents.parsers.parse_date_generator")
@override_settings(NUMBER_OF_SUGGESTED_DATES=0) @override_settings(NUMBER_OF_SUGGESTED_DATES=0)
def test_get_suggestions_dates_disabled( def test_get_suggestions_dates_disabled(
self, self,
mock_get_date_parser: mock.MagicMock, parse_date_generator,
): ):
""" """
GIVEN: GIVEN:
@@ -1999,8 +1999,7 @@ class TestDocumentApi(DirectoriesMixin, DocumentConsumeDelayMixin, APITestCase):
         )

         self.client.get(f"/api/documents/{doc.pk}/suggestions/")
-
-        mock_get_date_parser.assert_not_called()
+        self.assertFalse(parse_date_generator.called)

     def test_saved_views(self):
         u1 = User.objects.create_superuser("user1")


@@ -822,6 +822,35 @@ class TestTagBarcode(DirectoriesMixin, SampleDirMixin, GetReaderPluginMixin, TestCase):
             yield reader
             reader.cleanup()

+    @override_settings(
+        CONSUMER_ENABLE_TAG_BARCODE=True,
+        CONSUMER_TAG_BARCODE_MAPPING={"TAG:(.*)": "\\g<1>"},
+    )
+    def test_barcode_without_tag_match(self):
+        """
+        GIVEN:
+            - Barcode that does not match any TAG mapping pattern
+            - TAG mapping configured for "TAG:" prefix only
+        WHEN:
+            - is_tag property is checked on an ASN barcode
+        THEN:
+            - Returns False
+        """
+        test_file = self.BARCODE_SAMPLE_DIR / "barcode-39-asn-123.pdf"
+
+        with self.get_reader(test_file) as reader:
+            reader.detect()
+            self.assertGreater(
+                len(reader.barcodes),
+                0,
+                "Should have detected at least one barcode",
+            )
+            asn_barcode = reader.barcodes[0]
+            self.assertFalse(
+                asn_barcode.is_tag,
+                f"ASN barcode '{asn_barcode.value}' should not match TAG: pattern",
+            )
+
     @override_settings(CONSUMER_ENABLE_TAG_BARCODE=True)
     def test_scan_file_without_matching_barcodes(self):
         """
@@ -928,3 +957,163 @@ class TestTagBarcode(DirectoriesMixin, SampleDirMixin, GetReaderPluginMixin, TestCase):
             # expect error to be caught and logged only
             tags = reader.metadata.tag_ids
             self.assertEqual(tags, None)
+
+    @override_settings(
+        CONSUMER_ENABLE_TAG_BARCODE=True,
+        CONSUMER_TAG_BARCODE_SPLIT=True,
+        CONSUMER_TAG_BARCODE_MAPPING={"TAG:(.*)": "\\g<1>"},
+    )
+    def test_split_on_tag_barcodes(self):
+        """
+        GIVEN:
+            - PDF containing barcodes with TAG: prefix
+            - Tag barcode splitting is enabled with TAG: mapping
+        WHEN:
+            - File is processed
+        THEN:
+            - Splits should occur at pages with TAG barcodes
+            - Tags should NOT be assigned when tag splitting is enabled (they're assigned during re-consumption)
+        """
+        test_file = self.BARCODE_SAMPLE_DIR / "split-by-tag-basic.pdf"
+
+        with self.get_reader(test_file) as reader:
+            reader.detect()
+            separator_page_numbers = reader.get_separation_pages()
+            self.assertDictEqual(separator_page_numbers, {1: True, 3: True})
+            tags = reader.metadata.tag_ids
+            self.assertIsNone(tags)
+
+    @override_settings(
+        CONSUMER_ENABLE_TAG_BARCODE=True,
+        CONSUMER_TAG_BARCODE_SPLIT=False,
+        CONSUMER_TAG_BARCODE_MAPPING={"TAG:(.*)": "\\g<1>"},
+    )
+    def test_no_split_when_tag_split_disabled(self):
+        """
+        GIVEN:
+            - PDF containing TAG barcodes (TAG:invoice, TAG:receipt)
+            - Tag barcode splitting is disabled
+        WHEN:
+            - File is processed
+        THEN:
+            - No separation pages are identified
+            - Tags are still extracted and assigned
+        """
+        test_file = self.BARCODE_SAMPLE_DIR / "split-by-tag-basic.pdf"
+
+        with self.get_reader(test_file) as reader:
+            reader.run()
+            separator_page_numbers = reader.get_separation_pages()
+            self.assertDictEqual(separator_page_numbers, {})
+            tags = reader.metadata.tag_ids
+            self.assertEqual(len(tags), 2)
+
+    @override_settings(
+        CONSUMER_ENABLE_BARCODES=True,
+        CONSUMER_ENABLE_TAG_BARCODE=True,
+        CONSUMER_TAG_BARCODE_SPLIT=True,
+        CONSUMER_TAG_BARCODE_MAPPING={"TAG:(.*)": "\\g<1>"},
+        CELERY_TASK_ALWAYS_EAGER=True,
+        OCR_MODE="skip",
+    )
+    def test_consume_barcode_file_tag_split_and_assignment(self):
+        """
+        GIVEN:
+            - PDF containing TAG barcodes on pages 2 and 4 (TAG:invoice, TAG:receipt)
+            - Tag barcode splitting is enabled
+        WHEN:
+            - File is consumed
+        THEN:
+            - PDF is split into 3 documents at barcode pages
+            - Each split document has the appropriate TAG barcodes extracted and assigned
+            - Document 1: page 1 (no tags)
+            - Document 2: pages 2-3 with TAG:invoice
+            - Document 3: pages 4-5 with TAG:receipt
+        """
+        test_file = self.BARCODE_SAMPLE_DIR / "split-by-tag-basic.pdf"
+        dst = settings.SCRATCH_DIR / "split-by-tag-basic.pdf"
+        shutil.copy(test_file, dst)
+
+        with mock.patch("documents.tasks.ProgressManager", DummyProgressManager):
+            result = tasks.consume_file(
+                ConsumableDocument(
+                    source=DocumentSource.ConsumeFolder,
+                    original_file=dst,
+                ),
+                None,
+            )
+
+        self.assertEqual(result, "Barcode splitting complete!")
+
+        documents = Document.objects.all().order_by("id")
+        self.assertEqual(documents.count(), 3)
+
+        doc1 = documents[0]
+        self.assertEqual(doc1.tags.count(), 0)
+
+        doc2 = documents[1]
+        self.assertEqual(doc2.tags.count(), 1)
+        self.assertEqual(doc2.tags.first().name, "invoice")
+
+        doc3 = documents[2]
+        self.assertEqual(doc3.tags.count(), 1)
+        self.assertEqual(doc3.tags.first().name, "receipt")
+
+    @override_settings(
+        CONSUMER_ENABLE_TAG_BARCODE=True,
+        CONSUMER_TAG_BARCODE_SPLIT=True,
+        CONSUMER_TAG_BARCODE_MAPPING={"ASN(.*)": "ASN_\\g<1>", "TAG:(.*)": "\\g<1>"},
+    )
+    def test_split_by_mixed_asn_tag_backwards_compat(self):
+        """
+        GIVEN:
+            - PDF with mixed ASN and TAG barcodes
+            - Mapping that treats ASN barcodes as tags (backwards compatibility)
+            - ASN12345 on page 1, TAG:personal on page 3, ASN13456 on page 5, TAG:business on page 7
+        WHEN:
+            - File is consumed
+        THEN:
+            - Both ASN and TAG barcodes trigger splits
+            - Split points are at pages 3, 5, and 7 (page 1 never splits)
+            - 4 separate documents are produced
+        """
+        test_file = self.BARCODE_SAMPLE_DIR / "split-by-tag-mixed-asn.pdf"
+
+        with self.get_reader(test_file) as reader:
+            reader.detect()
+            separator_pages = reader.get_separation_pages()
+            self.assertDictEqual(separator_pages, {2: True, 4: True, 6: True})
+            document_list = reader.separate_pages(separator_pages)
+            self.assertEqual(len(document_list), 4)
+
+    @override_settings(
+        CONSUMER_ENABLE_TAG_BARCODE=True,
+        CONSUMER_TAG_BARCODE_SPLIT=True,
+        CONSUMER_TAG_BARCODE_MAPPING={"TAG:(.*)": "\\g<1>"},
+    )
+    def test_split_by_tag_multiple_per_page(self):
+        """
+        GIVEN:
+            - PDF with multiple TAG barcodes on same page
+            - TAG:invoice and TAG:expense on page 2, TAG:receipt on page 4
+        WHEN:
+            - File is processed
+        THEN:
+            - Pages with barcodes trigger splits
+            - Split points at pages 2 and 4
+            - 3 separate documents are produced
+        """
+        test_file = self.BARCODE_SAMPLE_DIR / "split-by-tag-multiple-per-page.pdf"
+
+        with self.get_reader(test_file) as reader:
+            reader.detect()
+            separator_pages = reader.get_separation_pages()
+            self.assertDictEqual(separator_pages, {1: True, 3: True})
+            document_list = reader.separate_pages(separator_pages)
+            self.assertEqual(len(document_list), 3)
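
The assertions above rest on a simple page-index convention: the keys returned by `get_separation_pages()` are zero-based page indices, and `True` means the barcode page is retained in the document that follows the split. A minimal sketch of that convention (illustration only, not the actual reader implementation; `separation_pages` is a hypothetical helper):

    # Sketch: keys are zero-based page indices; True = keep the separator
    # page in the next document (tag/ASN pages are retained after a split).
    def separation_pages(barcode_page_indices: set[int]) -> dict[int, bool]:
        # Index 0 never splits: a document cannot begin before its first page.
        return {idx: True for idx in sorted(barcode_page_indices) if idx > 0}

    # TAG barcodes on 1-based pages 2 and 4 -> indices 1 and 3
    assert separation_pages({1, 3}) == {1: True, 3: True}
    # Barcodes on 1-based pages 1, 3, 5 and 7 -> indices 0, 2, 4 and 6
    assert separation_pages({0, 2, 4, 6}) == {2: True, 4: True, 6: True}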


@@ -603,23 +603,21 @@ class TestPDFActions(DirectoriesMixin, TestCase):
             expected_filename,
         )
         self.assertEqual(consume_file_args[1].title, None)
-        self.assertTrue(consume_file_args[1].skip_asn)
+        # No metadata_document_id, delete_originals False, so ASN should be None
+        self.assertIsNone(consume_file_args[1].asn)

         # With metadata_document_id overrides
         result = bulk_edit.merge(doc_ids, metadata_document_id=metadata_document_id)
         consume_file_args, _ = mock_consume_file.call_args
         self.assertEqual(consume_file_args[1].title, "B (merged)")
         self.assertEqual(consume_file_args[1].created, self.doc2.created)
-        self.assertTrue(consume_file_args[1].skip_asn)
         self.assertEqual(result, "OK")

     @mock.patch("documents.bulk_edit.delete.si")
     @mock.patch("documents.tasks.consume_file.s")
-    @mock.patch("documents.bulk_edit.chain")
     def test_merge_and_delete_originals(
         self,
-        mock_chain,
         mock_consume_file,
         mock_delete_documents,
     ):
@@ -633,6 +631,12 @@ class TestPDFActions(DirectoriesMixin, TestCase):
             - Document deletion task should be called
         """
         doc_ids = [self.doc1.id, self.doc2.id, self.doc3.id]
+        self.doc1.archive_serial_number = 101
+        self.doc2.archive_serial_number = 102
+        self.doc3.archive_serial_number = 103
+        self.doc1.save()
+        self.doc2.save()
+        self.doc3.save()

         result = bulk_edit.merge(doc_ids, delete_originals=True)
         self.assertEqual(result, "OK")
@@ -643,7 +647,8 @@ class TestPDFActions(DirectoriesMixin, TestCase):
         mock_consume_file.assert_called()
         mock_delete_documents.assert_called()
-        mock_chain.assert_called_once()
+        consume_sig = mock_consume_file.return_value
+        consume_sig.apply_async.assert_called_once()

         consume_file_args, _ = mock_consume_file.call_args
         self.assertEqual(
@@ -651,7 +656,7 @@ class TestPDFActions(DirectoriesMixin, TestCase):
             expected_filename,
         )
         self.assertEqual(consume_file_args[1].title, None)
-        self.assertTrue(consume_file_args[1].skip_asn)
+        self.assertEqual(consume_file_args[1].asn, 101)

         delete_documents_args, _ = mock_delete_documents.call_args
         self.assertEqual(
@@ -659,6 +664,92 @@ class TestPDFActions(DirectoriesMixin, TestCase):
             doc_ids,
         )
+
+        self.doc1.refresh_from_db()
+        self.doc2.refresh_from_db()
+        self.doc3.refresh_from_db()
+        self.assertIsNone(self.doc1.archive_serial_number)
+        self.assertIsNone(self.doc2.archive_serial_number)
+        self.assertIsNone(self.doc3.archive_serial_number)
+
+    @mock.patch("documents.bulk_edit.delete.si")
+    @mock.patch("documents.tasks.consume_file.s")
+    def test_merge_and_delete_originals_restore_on_failure(
+        self,
+        mock_consume_file,
+        mock_delete_documents,
+    ):
+        """
+        GIVEN:
+            - Existing documents
+        WHEN:
+            - Merge action with deleting documents is called with 1 document
+            - Error occurs when queuing consume file task
+        THEN:
+            - Archive serial numbers are restored
+        """
+        doc_ids = [self.doc1.id]
+        self.doc1.archive_serial_number = 111
+        self.doc1.save()
+
+        sig = mock.Mock()
+        sig.apply_async.side_effect = Exception("boom")
+        mock_consume_file.return_value = sig
+
+        with self.assertRaises(Exception):
+            bulk_edit.merge(doc_ids, delete_originals=True)
+
+        self.doc1.refresh_from_db()
+        self.assertEqual(self.doc1.archive_serial_number, 111)
+
+    @mock.patch("documents.bulk_edit.delete.si")
+    @mock.patch("documents.tasks.consume_file.s")
+    def test_merge_and_delete_originals_metadata_handoff(
+        self,
+        mock_consume_file,
+        mock_delete_documents,
+    ):
+        """
+        GIVEN:
+            - Existing documents with ASNs
+        WHEN:
+            - Merge with delete_originals=True and metadata_document_id set
+        THEN:
+            - Handoff ASN uses metadata document ASN
+        """
+        doc_ids = [self.doc1.id, self.doc2.id]
+        self.doc1.archive_serial_number = 101
+        self.doc2.archive_serial_number = 202
+        self.doc1.save()
+        self.doc2.save()
+
+        result = bulk_edit.merge(
+            doc_ids,
+            metadata_document_id=self.doc2.id,
+            delete_originals=True,
+        )
+        self.assertEqual(result, "OK")
+
+        consume_file_args, _ = mock_consume_file.call_args
+        self.assertEqual(consume_file_args[1].asn, 202)
+
+    def test_restore_archive_serial_numbers_task(self):
+        """
+        GIVEN:
+            - Existing document with no archive serial number
+        WHEN:
+            - Restore archive serial number task is called with backup data
+        THEN:
+            - Document archive serial number is restored
+        """
+        self.doc1.archive_serial_number = 444
+        self.doc1.save()
+        Document.objects.filter(pk=self.doc1.id).update(archive_serial_number=None)
+
+        backup = {self.doc1.id: 444}
+        bulk_edit.restore_archive_serial_numbers_task(backup)
+
+        self.doc1.refresh_from_db()
+        self.assertEqual(self.doc1.archive_serial_number, 444)
+
     @mock.patch("documents.tasks.consume_file.s")
     def test_merge_with_archive_fallback(self, mock_consume_file):
         """
@@ -727,6 +818,7 @@ class TestPDFActions(DirectoriesMixin, TestCase):
         self.assertEqual(mock_consume_file.call_count, 2)
         consume_file_args, _ = mock_consume_file.call_args
         self.assertEqual(consume_file_args[1].title, "B (split 2)")
+        self.assertIsNone(consume_file_args[1].asn)

         self.assertEqual(result, "OK")
@@ -751,6 +843,8 @@ class TestPDFActions(DirectoriesMixin, TestCase):
""" """
doc_ids = [self.doc2.id] doc_ids = [self.doc2.id]
pages = [[1, 2], [3]] pages = [[1, 2], [3]]
self.doc2.archive_serial_number = 200
self.doc2.save()
result = bulk_edit.split(doc_ids, pages, delete_originals=True) result = bulk_edit.split(doc_ids, pages, delete_originals=True)
self.assertEqual(result, "OK") self.assertEqual(result, "OK")
@@ -768,6 +862,42 @@ class TestPDFActions(DirectoriesMixin, TestCase):
             doc_ids,
         )
+
+        self.doc2.refresh_from_db()
+        self.assertIsNone(self.doc2.archive_serial_number)
+
+    @mock.patch("documents.bulk_edit.delete.si")
+    @mock.patch("documents.tasks.consume_file.s")
+    @mock.patch("documents.bulk_edit.chord")
+    def test_split_restore_on_failure(
+        self,
+        mock_chord,
+        mock_consume_file,
+        mock_delete_documents,
+    ):
+        """
+        GIVEN:
+            - Existing documents
+        WHEN:
+            - Split action with deleting documents is called with 1 document and 1 page group
+            - Error occurs when queuing chord task
+        THEN:
+            - Archive serial numbers are restored
+        """
+        doc_ids = [self.doc2.id]
+        pages = [[1, 2]]
+        self.doc2.archive_serial_number = 222
+        self.doc2.save()
+
+        sig = mock.Mock()
+        sig.apply_async.side_effect = Exception("boom")
+        mock_chord.return_value = sig
+
+        result = bulk_edit.split(doc_ids, pages, delete_originals=True)
+        self.assertEqual(result, "OK")
+
+        self.doc2.refresh_from_db()
+        self.assertEqual(self.doc2.archive_serial_number, 222)
+
     @mock.patch("documents.tasks.consume_file.delay")
     @mock.patch("pikepdf.Pdf.save")
     def test_split_with_errors(self, mock_save_pdf, mock_consume_file):
@@ -968,10 +1098,49 @@ class TestPDFActions(DirectoriesMixin, TestCase):
         mock_chord.return_value.delay.return_value = None
         doc_ids = [self.doc2.id]
         operations = [{"page": 1}, {"page": 2}]
+        self.doc2.archive_serial_number = 250
+        self.doc2.save()

         result = bulk_edit.edit_pdf(doc_ids, operations, delete_original=True)
         self.assertEqual(result, "OK")
         mock_chord.assert_called_once()
+        consume_file_args, _ = mock_consume_file.call_args
+        self.assertEqual(consume_file_args[1].asn, 250)
+
+        self.doc2.refresh_from_db()
+        self.assertIsNone(self.doc2.archive_serial_number)
+
+    @mock.patch("documents.bulk_edit.delete.si")
+    @mock.patch("documents.tasks.consume_file.s")
+    @mock.patch("documents.bulk_edit.chord")
+    def test_edit_pdf_restore_on_failure(
+        self,
+        mock_chord,
+        mock_consume_file,
+        mock_delete_documents,
+    ):
+        """
+        GIVEN:
+            - Existing document
+        WHEN:
+            - edit_pdf is called with delete_original=True
+            - Error occurs when queuing chord task
+        THEN:
+            - Archive serial numbers are restored
+        """
+        doc_ids = [self.doc2.id]
+        operations = [{"page": 1}]
+        self.doc2.archive_serial_number = 333
+        self.doc2.save()
+
+        sig = mock.Mock()
+        sig.apply_async.side_effect = Exception("boom")
+        mock_chord.return_value = sig
+
+        with self.assertRaises(Exception):
+            bulk_edit.edit_pdf(doc_ids, operations, delete_original=True)
+
+        self.doc2.refresh_from_db()
+        self.assertEqual(self.doc2.archive_serial_number, 333)
+
     @mock.patch("documents.tasks.update_document_content_maybe_archive_file.delay")
     def test_edit_pdf_with_update_document(self, mock_update_document):
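
Taken together, these tests pin down a release-then-restore pattern: because archive serial numbers are unique, the originals give up their ASNs before the merged, split, or edited replacement is queued for consumption, the replacement inherits one via the overrides, and a stored backup is replayed if queuing fails. A rough sketch of that pattern with simplified names (the real logic lives in documents.bulk_edit):

    # Sketch only: back up and clear ASNs so a replacement can reuse one.
    def release_asns(doc_ids: list[int]) -> dict[int, int]:
        backup = {
            doc.id: doc.archive_serial_number
            for doc in Document.objects.filter(id__in=doc_ids)
            if doc.archive_serial_number is not None
        }
        Document.objects.filter(id__in=backup).update(archive_serial_number=None)
        return backup

    # Replayed (e.g. by restore_archive_serial_numbers_task) on failure.
    def restore_asns(backup: dict[int, int]) -> None:
        for doc_id, asn in backup.items():
            Document.objects.filter(pk=doc_id).update(archive_serial_number=asn)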


@@ -14,6 +14,7 @@ from django.test import override_settings
 from django.utils import timezone
 from guardian.core import ObjectPermissionChecker

+from documents.barcodes import BarcodePlugin
 from documents.consumer import ConsumerError
 from documents.data_models import DocumentMetadataOverrides
 from documents.data_models import DocumentSource
@@ -412,14 +413,6 @@ class TestConsumer(
         self.assertEqual(document.archive_serial_number, 123)

         self._assert_first_last_send_progress()

-    def testMetadataOverridesSkipAsnPropagation(self):
-        overrides = DocumentMetadataOverrides()
-        incoming = DocumentMetadataOverrides(skip_asn=True)
-        overrides.update(incoming)
-        self.assertTrue(overrides.skip_asn)
-
     def testOverrideTitlePlaceholders(self):
         c = Correspondent.objects.create(name="Correspondent Name")
         dt = DocumentType.objects.create(name="DocType Name")
@@ -1271,3 +1264,46 @@ class PostConsumeTestCase(DirectoriesMixin, GetConsumerMixin, TestCase):
r"sample\.pdf: Error while executing post-consume script: Command '\[.*\]' returned non-zero exit status \d+\.", r"sample\.pdf: Error while executing post-consume script: Command '\[.*\]' returned non-zero exit status \d+\.",
): ):
consumer.run_post_consume_script(doc) consumer.run_post_consume_script(doc)
class TestMetadataOverrides(TestCase):
def test_update_skip_asn_if_exists(self):
base = DocumentMetadataOverrides()
incoming = DocumentMetadataOverrides(skip_asn_if_exists=True)
base.update(incoming)
self.assertTrue(base.skip_asn_if_exists)
class TestBarcodeApplyDetectedASN(TestCase):
"""
GIVEN:
- Existing Documents with ASN 123
WHEN:
- A BarcodePlugin which detected an ASN
THEN:
- If skip_asn_if_exists is set, and ASN exists, do not set ASN
- If skip_asn_if_exists is set, and ASN does not exist, set ASN
"""
def test_apply_detected_asn_skips_existing_when_flag_set(self):
doc = Document.objects.create(
checksum="X1",
title="D1",
archive_serial_number=123,
)
metadata = DocumentMetadataOverrides(skip_asn_if_exists=True)
plugin = BarcodePlugin(
input_doc=mock.Mock(),
metadata=metadata,
status_mgr=mock.Mock(),
base_tmp_dir=tempfile.gettempdir(),
task_id="test-task",
)
plugin._apply_detected_asn(123)
self.assertIsNone(plugin.metadata.asn)
doc.hard_delete()
plugin._apply_detected_asn(123)
self.assertEqual(plugin.metadata.asn, 123)
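
The contract here is narrow: with skip_asn_if_exists set, a barcode-detected ASN is applied only when no existing document already holds that number. A condensed sketch of the check (assumed shape of BarcodePlugin._apply_detected_asn, not the exact implementation):

    # Sketch of the behaviour asserted above.
    def apply_detected_asn(metadata: DocumentMetadataOverrides, asn: int) -> None:
        taken = Document.objects.filter(archive_serial_number=asn).exists()
        if metadata.skip_asn_if_exists and taken:
            return  # another document owns this ASN; leave metadata.asn unset
        metadata.asn = asn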


@@ -0,0 +1,538 @@
import datetime
from zoneinfo import ZoneInfo
import pytest
from pytest_django.fixtures import SettingsWrapper
from documents.parsers import parse_date
from documents.parsers import parse_date_generator
@pytest.mark.django_db()
class TestDate:
def test_date_format_1(self):
text = "lorem ipsum 130218 lorem ipsum"
assert parse_date("", text) is None
def test_date_format_2(self):
text = "lorem ipsum 2018 lorem ipsum"
assert parse_date("", text) is None
def test_date_format_3(self):
text = "lorem ipsum 20180213 lorem ipsum"
assert parse_date("", text) is None
def test_date_format_4(self, settings_timezone: ZoneInfo):
text = "lorem ipsum 13.02.2018 lorem ipsum"
date = parse_date("", text)
assert date == datetime.datetime(2018, 2, 13, 0, 0, tzinfo=settings_timezone)
def test_date_format_5(self, settings_timezone: ZoneInfo):
text = "lorem ipsum 130218, 2018, 20180213 and lorem 13.02.2018 lorem ipsum"
date = parse_date("", text)
assert date == datetime.datetime(2018, 2, 13, 0, 0, tzinfo=settings_timezone)
def test_date_format_6(self):
text = (
"lorem ipsum\n"
"Wohnort\n"
"3100\n"
"IBAN\n"
"AT87 4534\n"
"1234\n"
"1234 5678\n"
"BIC\n"
"lorem ipsum"
)
assert parse_date("", text) is None
def test_date_format_7(
self,
settings: SettingsWrapper,
settings_timezone: ZoneInfo,
):
settings.DATE_PARSER_LANGUAGES = ["de"]
text = "lorem ipsum\nMärz 2019\nlorem ipsum"
date = parse_date("", text)
assert date == datetime.datetime(2019, 3, 1, 0, 0, tzinfo=settings_timezone)
def test_date_format_8(
self,
settings: SettingsWrapper,
settings_timezone: ZoneInfo,
):
settings.DATE_PARSER_LANGUAGES = ["de"]
text = (
"lorem ipsum\n"
"Wohnort\n"
"3100\n"
"IBAN\n"
"AT87 4534\n"
"1234\n"
"1234 5678\n"
"BIC\n"
"lorem ipsum\n"
"März 2020"
)
assert parse_date("", text) == datetime.datetime(
2020,
3,
1,
0,
0,
tzinfo=settings_timezone,
)
def test_date_format_9(
self,
settings: SettingsWrapper,
settings_timezone: ZoneInfo,
):
settings.DATE_PARSER_LANGUAGES = ["de"]
text = "lorem ipsum\n27. Nullmonth 2020\nMärz 2020\nlorem ipsum"
assert parse_date("", text) == datetime.datetime(
2020,
3,
1,
0,
0,
tzinfo=settings_timezone,
)
def test_date_format_10(self, settings_timezone: ZoneInfo):
text = "Customer Number Currency 22-MAR-2022 Credit Card 1934829304"
assert parse_date("", text) == datetime.datetime(
2022,
3,
22,
0,
0,
tzinfo=settings_timezone,
)
def test_date_format_11(self, settings_timezone: ZoneInfo):
text = "Customer Number Currency 22 MAR 2022 Credit Card 1934829304"
assert parse_date("", text) == datetime.datetime(
2022,
3,
22,
0,
0,
tzinfo=settings_timezone,
)
def test_date_format_12(self, settings_timezone: ZoneInfo):
text = "Customer Number Currency 22/MAR/2022 Credit Card 1934829304"
assert parse_date("", text) == datetime.datetime(
2022,
3,
22,
0,
0,
tzinfo=settings_timezone,
)
def test_date_format_13(self, settings_timezone: ZoneInfo):
text = "Customer Number Currency 22.MAR.2022 Credit Card 1934829304"
assert parse_date("", text) == datetime.datetime(
2022,
3,
22,
0,
0,
tzinfo=settings_timezone,
)
def test_date_format_14(self, settings_timezone: ZoneInfo):
text = "Customer Number Currency 22.MAR 2022 Credit Card 1934829304"
assert parse_date("", text) == datetime.datetime(
2022,
3,
22,
0,
0,
tzinfo=settings_timezone,
)
def test_date_format_15(self):
text = "Customer Number Currency 22.MAR.22 Credit Card 1934829304"
assert parse_date("", text) is None
def test_date_format_16(self):
text = "Customer Number Currency 22.MAR,22 Credit Card 1934829304"
assert parse_date("", text) is None
def test_date_format_17(self):
text = "Customer Number Currency 22,MAR,2022 Credit Card 1934829304"
assert parse_date("", text) is None
def test_date_format_18(self):
text = "Customer Number Currency 22 MAR,2022 Credit Card 1934829304"
assert parse_date("", text) is None
def test_date_format_19(self, settings_timezone: ZoneInfo):
text = "Customer Number Currency 21st MAR 2022 Credit Card 1934829304"
assert parse_date("", text) == datetime.datetime(
2022,
3,
21,
0,
0,
tzinfo=settings_timezone,
)
def test_date_format_20(self, settings_timezone: ZoneInfo):
text = "Customer Number Currency 22nd March 2022 Credit Card 1934829304"
assert parse_date("", text) == datetime.datetime(
2022,
3,
22,
0,
0,
tzinfo=settings_timezone,
)
def test_date_format_21(self, settings_timezone: ZoneInfo):
text = "Customer Number Currency 2nd MAR 2022 Credit Card 1934829304"
assert parse_date("", text) == datetime.datetime(
2022,
3,
2,
0,
0,
tzinfo=settings_timezone,
)
def test_date_format_22(self, settings_timezone: ZoneInfo):
text = "Customer Number Currency 23rd MAR 2022 Credit Card 1934829304"
assert parse_date("", text) == datetime.datetime(
2022,
3,
23,
0,
0,
tzinfo=settings_timezone,
)
def test_date_format_23(self, settings_timezone: ZoneInfo):
text = "Customer Number Currency 24th MAR 2022 Credit Card 1934829304"
assert parse_date("", text) == datetime.datetime(
2022,
3,
24,
0,
0,
tzinfo=settings_timezone,
)
def test_date_format_24(self, settings_timezone: ZoneInfo):
text = "Customer Number Currency 21-MAR-2022 Credit Card 1934829304"
assert parse_date("", text) == datetime.datetime(
2022,
3,
21,
0,
0,
tzinfo=settings_timezone,
)
def test_date_format_25(self, settings_timezone: ZoneInfo):
text = "Customer Number Currency 25TH MAR 2022 Credit Card 1934829304"
assert parse_date("", text) == datetime.datetime(
2022,
3,
25,
0,
0,
tzinfo=settings_timezone,
)
def test_date_format_26(self, settings_timezone: ZoneInfo):
text = "CHASE 0 September 25, 2019 JPMorgan Chase Bank, NA. P0 Box 182051"
assert parse_date("", text) == datetime.datetime(
2019,
9,
25,
0,
0,
tzinfo=settings_timezone,
)
def test_crazy_date_past(self):
assert parse_date("", "01-07-0590 00:00:00") is None
def test_crazy_date_future(self):
assert parse_date("", "01-07-2350 00:00:00") is None
def test_crazy_date_with_spaces(self):
assert parse_date("", "20 408000l 2475") is None
def test_utf_month_names(
self,
settings: SettingsWrapper,
settings_timezone: ZoneInfo,
):
settings.DATE_PARSER_LANGUAGES = ["fr", "de", "hr", "cs", "pl", "tr"]
assert parse_date("", "13 décembre 2023") == datetime.datetime(
2023,
12,
13,
0,
0,
tzinfo=settings_timezone,
)
assert parse_date("", "13 août 2022") == datetime.datetime(
2022,
8,
13,
0,
0,
tzinfo=settings_timezone,
)
assert parse_date("", "11 März 2020") == datetime.datetime(
2020,
3,
11,
0,
0,
tzinfo=settings_timezone,
)
assert parse_date("", "17. ožujka 2018.") == datetime.datetime(
2018,
3,
17,
0,
0,
tzinfo=settings_timezone,
)
assert parse_date("", "1. veljače 2016.") == datetime.datetime(
2016,
2,
1,
0,
0,
tzinfo=settings_timezone,
)
assert parse_date("", "15. února 1985") == datetime.datetime(
1985,
2,
15,
0,
0,
tzinfo=settings_timezone,
)
assert parse_date("", "30. září 2011") == datetime.datetime(
2011,
9,
30,
0,
0,
tzinfo=settings_timezone,
)
assert parse_date("", "28. května 1990") == datetime.datetime(
1990,
5,
28,
0,
0,
tzinfo=settings_timezone,
)
assert parse_date("", "1. grudzień 1997") == datetime.datetime(
1997,
12,
1,
0,
0,
tzinfo=settings_timezone,
)
assert parse_date("", "17 Şubat 2024") == datetime.datetime(
2024,
2,
17,
0,
0,
tzinfo=settings_timezone,
)
assert parse_date("", "30 Ağustos 2012") == datetime.datetime(
2012,
8,
30,
0,
0,
tzinfo=settings_timezone,
)
assert parse_date("", "17 Eylül 2000") == datetime.datetime(
2000,
9,
17,
0,
0,
tzinfo=settings_timezone,
)
assert parse_date("", "5. október 1992") == datetime.datetime(
1992,
10,
5,
0,
0,
tzinfo=settings_timezone,
)
def test_multiple_dates(self, settings_timezone: ZoneInfo):
text = """This text has multiple dates.
For example 02.02.2018, 22 July 2022 and December 2021.
But not 24-12-9999 because it's in the future..."""
dates = list(parse_date_generator("", text))
assert dates == [
datetime.datetime(2018, 2, 2, 0, 0, tzinfo=settings_timezone),
datetime.datetime(
2022,
7,
22,
0,
0,
tzinfo=settings_timezone,
),
datetime.datetime(
2021,
12,
1,
0,
0,
tzinfo=settings_timezone,
),
]
def test_filename_date_parse_valid_ymd(
self,
settings: SettingsWrapper,
settings_timezone: ZoneInfo,
):
"""
GIVEN:
- Date parsing from the filename is enabled
- Filename date format is with Year Month Day (YMD)
- Filename contains date matching the format
THEN:
- Should parse the date from the filename
"""
settings.FILENAME_DATE_ORDER = "YMD"
assert parse_date(
"/tmp/Scan-2022-04-01.pdf",
"No date in here",
) == datetime.datetime(2022, 4, 1, 0, 0, tzinfo=settings_timezone)
def test_filename_date_parse_valid_dmy(
self,
settings: SettingsWrapper,
settings_timezone: ZoneInfo,
):
"""
GIVEN:
- Date parsing from the filename is enabled
- Filename date format is with Day Month Year (DMY)
- Filename contains date matching the format
THEN:
- Should parse the date from the filename
"""
settings.FILENAME_DATE_ORDER = "DMY"
assert parse_date(
"/tmp/Scan-10.01.2021.pdf",
"No date in here",
) == datetime.datetime(2021, 1, 10, 0, 0, tzinfo=settings_timezone)
def test_filename_date_parse_invalid(self, settings: SettingsWrapper):
"""
GIVEN:
- Date parsing from the filename is enabled
- Filename includes no date
- File content includes no date
THEN:
- No date is parsed
"""
settings.FILENAME_DATE_ORDER = "YMD"
assert parse_date("/tmp/20 408000l 2475 - test.pdf", "No date in here") is None
def test_filename_date_ignored_use_content(
self,
settings: SettingsWrapper,
settings_timezone: ZoneInfo,
):
"""
GIVEN:
- Date parsing from the filename is enabled
            - Filename date format is with Year Month Day (YMD)
- Date order is Day Month Year (DMY, the default)
- Filename contains date matching the format
- Filename date is an ignored date
- File content includes a date
THEN:
- Should parse the date from the content not filename
"""
settings.FILENAME_DATE_ORDER = "YMD"
settings.IGNORE_DATES = (datetime.date(2022, 4, 1),)
assert parse_date(
"/tmp/Scan-2022-04-01.pdf",
"The matching date is 24.03.2022",
) == datetime.datetime(2022, 3, 24, 0, 0, tzinfo=settings_timezone)
def test_ignored_dates_default_order(
self,
settings: SettingsWrapper,
settings_timezone: ZoneInfo,
):
"""
GIVEN:
- Ignore dates have been set
- File content includes ignored dates
- File content includes 1 non-ignored date
THEN:
- Should parse the date non-ignored date from content
"""
settings.IGNORE_DATES = (datetime.date(2019, 11, 3), datetime.date(2020, 1, 17))
text = "lorem ipsum 110319, 20200117 and lorem 13.02.2018 lorem ipsum"
assert parse_date("", text) == datetime.datetime(
2018,
2,
13,
0,
0,
tzinfo=settings_timezone,
)
def test_ignored_dates_order_ymd(
self,
settings: SettingsWrapper,
settings_timezone: ZoneInfo,
):
"""
GIVEN:
- Ignore dates have been set
- Date order is Year Month Date (YMD)
- File content includes ignored dates
- File content includes 1 non-ignored date
THEN:
- Should parse the date non-ignored date from content
"""
settings.FILENAME_DATE_ORDER = "YMD"
settings.IGNORE_DATES = (datetime.date(2019, 11, 3), datetime.date(2020, 1, 17))
text = "lorem ipsum 190311, 20200117 and lorem 13.02.2018 lorem ipsum"
assert parse_date("", text) == datetime.datetime(
2018,
2,
13,
0,
0,
tzinfo=settings_timezone,
)


@@ -148,6 +148,7 @@ from documents.models import Workflow
 from documents.models import WorkflowAction
 from documents.models import WorkflowTrigger
 from documents.parsers import get_parser_class_for_mime_type
+from documents.parsers import parse_date_generator
 from documents.permissions import AcknowledgeTasksPermissions
 from documents.permissions import PaperlessAdminPermissions
 from documents.permissions import PaperlessNotePermissions
@@ -157,7 +158,6 @@ from documents.permissions import get_document_count_filter_for_user
 from documents.permissions import get_objects_for_user_owner_aware
 from documents.permissions import has_perms_owner_aware
 from documents.permissions import set_permissions_for_object
-from documents.plugins.date_parsing import get_date_parser
 from documents.schema import generate_object_with_permissions_schema
 from documents.serialisers import AcknowledgeTasksViewSerializer
 from documents.serialisers import BulkDownloadSerializer
@@ -1023,8 +1023,7 @@ class DocumentViewSet(
         dates = []
         if settings.NUMBER_OF_SUGGESTED_DATES > 0:
-            with get_date_parser() as date_parser:
-                gen = date_parser.parse(doc.filename, doc.content)
+            gen = parse_date_generator(doc.filename, doc.content)
             dates = sorted(
                 {
                     i
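
parse_date_generator yields candidate dates lazily, which removes the need for the context-managed parser object the old code used. A hedged sketch of how a view might consume it with a cap (islice and the helper name are illustrative assumptions, not necessarily what views.py does):

    from itertools import islice

    # Sketch: take at most `limit` candidates, dedupe, and sort them.
    def suggested_dates(filename: str, content: str, limit: int) -> list:
        candidates = islice(parse_date_generator(filename, content), limit)
        return sorted(set(candidates))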


@@ -2,7 +2,7 @@ msgid ""
msgstr "" msgstr ""
"Project-Id-Version: paperless-ngx\n" "Project-Id-Version: paperless-ngx\n"
"Report-Msgid-Bugs-To: \n" "Report-Msgid-Bugs-To: \n"
"POT-Creation-Date: 2026-01-27 18:56+0000\n" "POT-Creation-Date: 2026-01-29 16:06+0000\n"
"PO-Revision-Date: 2022-02-17 04:17\n" "PO-Revision-Date: 2022-02-17 04:17\n"
"Last-Translator: \n" "Last-Translator: \n"
"Language-Team: English\n" "Language-Team: English\n"
@@ -1786,35 +1786,39 @@ msgstr ""
msgid "Sets the tag barcode mapping" msgid "Sets the tag barcode mapping"
msgstr "" msgstr ""
#: paperless/models.py:287 #: paperless/models.py:284
msgid "Enables AI features" msgid "Enables splitting on tag barcodes"
msgstr "" msgstr ""
#: paperless/models.py:293 #: paperless/models.py:293
msgid "Enables AI features"
msgstr ""
#: paperless/models.py:299
msgid "Sets the LLM embedding backend" msgid "Sets the LLM embedding backend"
msgstr "" msgstr ""
#: paperless/models.py:301 #: paperless/models.py:307
msgid "Sets the LLM embedding model" msgid "Sets the LLM embedding model"
msgstr "" msgstr ""
#: paperless/models.py:308 #: paperless/models.py:314
msgid "Sets the LLM backend" msgid "Sets the LLM backend"
msgstr "" msgstr ""
#: paperless/models.py:316 #: paperless/models.py:322
msgid "Sets the LLM model" msgid "Sets the LLM model"
msgstr "" msgstr ""
#: paperless/models.py:323 #: paperless/models.py:329
msgid "Sets the LLM API key" msgid "Sets the LLM API key"
msgstr "" msgstr ""
#: paperless/models.py:330 #: paperless/models.py:336
msgid "Sets the LLM endpoint, optional" msgid "Sets the LLM endpoint, optional"
msgstr "" msgstr ""
#: paperless/models.py:337 #: paperless/models.py:343
msgid "paperless application settings" msgid "paperless application settings"
msgstr "" msgstr ""


@@ -116,6 +116,7 @@ class BarcodeConfig(BaseConfig):
     barcode_max_pages: int = dataclasses.field(init=False)
     barcode_enable_tag: bool = dataclasses.field(init=False)
     barcode_tag_mapping: dict[str, str] = dataclasses.field(init=False)
+    barcode_tag_split: bool = dataclasses.field(init=False)

     def __post_init__(self) -> None:
         app_config = self._get_config_instance()
@@ -153,6 +154,9 @@ class BarcodeConfig(BaseConfig):
         self.barcode_tag_mapping = (
             app_config.barcode_tag_mapping or settings.CONSUMER_TAG_BARCODE_MAPPING
         )
+        self.barcode_tag_split = (
+            app_config.barcode_tag_split or settings.CONSUMER_TAG_BARCODE_SPLIT
+        )


 @dataclasses.dataclass
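
As with the neighbouring options, a database-stored override wins and the env-derived setting is the fallback. One consequence of the `or` worth noting: an explicit False stored in ApplicationConfiguration also defers to the settings value rather than forcing the feature off. A minimal illustration:

    # Illustration of the `app_config.X or settings.Y` fallback used above.
    def resolve(db_value: bool | None, settings_value: bool) -> bool:
        return db_value or settings_value

    assert resolve(True, False) is True   # DB override wins
    assert resolve(None, True) is True    # unset in DB -> settings value applies
    assert resolve(False, True) is True   # falsy DB value also falls through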


@@ -0,0 +1,21 @@
# Generated by Django 5.1.7 on 2025-12-15 21:30

from django.db import migrations
from django.db import models


class Migration(migrations.Migration):
    dependencies = [
        ("paperless", "0005_applicationconfiguration_ai_enabled_and_more"),
    ]

    operations = [
        migrations.AddField(
            model_name="applicationconfiguration",
            name="barcode_tag_split",
            field=models.BooleanField(
                null=True,
                verbose_name="Enables splitting on tag barcodes",
            ),
        ),
    ]


@@ -279,6 +279,12 @@ class ApplicationConfiguration(AbstractSingletonModel):
         null=True,
     )

+    # PAPERLESS_CONSUMER_TAG_BARCODE_SPLIT
+    barcode_tag_split = models.BooleanField(
+        verbose_name=_("Enables splitting on tag barcodes"),
+        null=True,
+    )
+
     """
     AI related settings
     """


@@ -1149,6 +1149,10 @@ CONSUMER_TAG_BARCODE_MAPPING = dict(
     ),
 )

+CONSUMER_TAG_BARCODE_SPLIT: Final[bool] = __get_boolean(
+    "PAPERLESS_CONSUMER_TAG_BARCODE_SPLIT",
+)
+
 CONSUMER_ENABLE_COLLATE_DOUBLE_SIDED: Final[bool] = __get_boolean(
     "PAPERLESS_CONSUMER_ENABLE_COLLATE_DOUBLE_SIDED",
 )
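
Like the other consumer booleans, the flag defaults to off and is parsed from the environment. Enabling it in a typical deployment would look like this (the docker-compose environment-file style is an assumption; any mechanism that sets the variable works):

    # e.g. in docker-compose.env
    PAPERLESS_CONSUMER_TAG_BARCODE_SPLIT=true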


@@ -12,6 +12,9 @@ from paperless_tika.parsers import TikaDocumentParser
reason="No Gotenberg/Tika servers to test with", reason="No Gotenberg/Tika servers to test with",
) )
@pytest.mark.django_db() @pytest.mark.django_db()
@pytest.mark.live
@pytest.mark.gotenberg
@pytest.mark.tika
class TestTikaParserAgainstServer: class TestTikaParserAgainstServer:
""" """
This test case tests the Tika parsing against a live tika server, This test case tests the Tika parsing against a live tika server,


@@ -128,6 +128,8 @@ class TestTikaParser:
         request = httpx_mock.get_request()
+        assert request is not None

         expected_field_name = "pdfa"
         content_type = request.headers["Content-Type"]