Mirror of https://github.com/paperless-ngx/paperless-ngx.git
Synced 2025-07-26 18:14:37 -05:00

Compare commits

No commits in common. "9f3946d9383e7427f6fad1a29b52a2187d2406f2" and "a59fe0cb3c0b03f95f27f1761659d773d4f47e56" have entirely different histories.

9f3946d938...a59fe0cb3c
@@ -1700,48 +1700,3 @@ password. All of these options come from their similarly-named [Django settings]
 #### [`PAPERLESS_EMAIL_USE_SSL=<bool>`](#PAPERLESS_EMAIL_USE_SSL) {#PAPERLESS_EMAIL_USE_SSL}
 
 : Defaults to false.
-
-## AI {#ai}
-
-#### [`PAPERLESS_ENABLE_AI=<bool>`](#PAPERLESS_ENABLE_AI) {#PAPERLESS_ENABLE_AI}
-
-: Enables the AI features in Paperless. This includes the AI-based
-    suggestions. This setting is required to be set to true in order to use the AI features.
-
-    Defaults to false.
-
-#### [`PAPERLESS_AI_BACKEND=<str>`](#PAPERLESS_AI_BACKEND) {#PAPERLESS_AI_BACKEND}
-
-: The AI backend to use. This can be either "openai" or "ollama". If set to "ollama", the AI
-    features will be run locally on your machine. If set to "openai", the AI features will be run
-    using the OpenAI API. This setting is required to be set to use the AI features.
-
-    Defaults to None.
-
-    !!! note
-
-        The OpenAI API is a paid service. You will need to set up an OpenAI account and
-        will be charged for usage incurred by Paperless-ngx features and your document data
-        will (of course) be shared with OpenAI. Paperless-ngx does not endorse the use of the
-        OpenAI API in any way.
-
-        Refer to the OpenAI terms of service, and use at your own risk.
-
-#### [`PAPERLESS_LLM_MODEL=<str>`](#PAPERLESS_LLM_MODEL) {#PAPERLESS_LLM_MODEL}
-
-: The model to use for the AI backend, i.e. "gpt-3.5-turbo", "gpt-4" or any of the models supported by the
-    current backend. This setting is required to be set to use the AI features.
-
-    Defaults to None.
-
-#### [`PAPERLESS_LLM_API_KEY=<str>`](#PAPERLESS_LLM_API_KEY) {#PAPERLESS_LLM_API_KEY}
-
-: The API key to use for the AI backend. This is required for the OpenAI backend only.
-
-    Defaults to None.
-
-#### [`PAPERLESS_LLM_URL=<str>`](#PAPERLESS_LLM_URL) {#PAPERLESS_LLM_URL}
-
-: The URL to use for the AI backend. This is required for the Ollama backend only.
-
-    Defaults to None.
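The removed configuration docs above describe a chat-style backend selected by `PAPERLESS_AI_BACKEND`. As a rough, hypothetical illustration of the request body such a client sends to an Ollama `/api/chat` endpoint (the helper name and example model are assumptions, not paperless-ngx code):

```python
# Hypothetical sketch of the JSON body the removed AI client posted to an
# Ollama backend; the function name and model are illustrative only.

def build_ollama_chat_payload(model: str, prompt: str) -> dict:
    """Build the JSON body for a non-streaming Ollama chat request."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "stream": False,  # request one complete response, not a stream
    }

payload = build_ollama_chat_payload("llama3", "Suggest tags for this document.")
print(payload["model"])  # llama3
```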
@@ -25,12 +25,11 @@ physical documents into a searchable online archive so you can keep, well, _less
 ## Features
 
 - **Organize and index** your scanned documents with tags, correspondents, types, and more.
-- _Your_ data is stored locally on _your_ server and is never transmitted or shared in any way, unless you explicitly choose to do so.
+- _Your_ data is stored locally on _your_ server and is never transmitted or shared in any way.
 - Performs **OCR** on your documents, adding searchable and selectable text, even to documents scanned with only images.
 - Utilizes the open-source Tesseract engine to recognize more than 100 languages.
 - Documents are saved as PDF/A format which is designed for long term storage, alongside the unaltered originals.
 - Uses machine-learning to automatically add tags, correspondents and document types to your documents.
-- **New**: Paperless-ngx can now leverage AI (Large Language Models or LLMs) for document suggestions. This is an optional feature that can be enabled (and is disabled by default).
 - Supports PDF documents, images, plain text files, Office documents (Word, Excel, Powerpoint, and LibreOffice equivalents)[^1] and more.
 - Paperless stores your documents plain on disk. Filenames and folders are managed by paperless and their format can be configured freely with different configurations assigned to different documents.
 - **Beautiful, modern web application** that features:
@@ -260,14 +260,6 @@ Once setup, navigating to the email settings page in Paperless-ngx will allow yo
 You can also submit a document using the REST API, see [POSTing documents](api.md#file-uploads)
 for details.
 
-## Document Suggestions
-
-Paperless-ngx can suggest tags, correspondents, document types and storage paths for documents based on the content of the document. This is done using a machine learning model that is trained on the documents in your database. The suggestions are shown in the document detail page and can be accepted or rejected by the user.
-
-### AI-Enhanced Suggestions
-
-If enabled, Paperless-ngx can use an AI LLM model to suggest document titles, dates, tags, correspondents and document types for documents. This feature will always be "opt-in" and does not disable the existing suggestion system. Currently, both remote (via the OpenAI API) and local (via Ollama) models are supported, see [configuration](configuration.md#ai) for details.
-
 ## Sharing documents from Paperless-ngx
 
 Paperless-ngx supports sharing documents with other users by assigning them [permissions](#object-permissions)
@@ -35,7 +35,6 @@
       @case (ConfigOptionType.String) { <pngx-input-text [formControlName]="option.key" [error]="errors[option.key]"></pngx-input-text> }
       @case (ConfigOptionType.JSON) { <pngx-input-text [formControlName]="option.key" [error]="errors[option.key]"></pngx-input-text> }
       @case (ConfigOptionType.File) { <pngx-input-file [formControlName]="option.key" (upload)="uploadFile($event, option.key)" [error]="errors[option.key]"></pngx-input-file> }
-      @case (ConfigOptionType.Password) { <pngx-input-password [formControlName]="option.key" [error]="errors[option.key]"></pngx-input-password> }
     }
   </div>
 </div>
@@ -29,7 +29,6 @@ import { SettingsService } from 'src/app/services/settings.service'
 import { ToastService } from 'src/app/services/toast.service'
 import { FileComponent } from '../../common/input/file/file.component'
 import { NumberComponent } from '../../common/input/number/number.component'
-import { PasswordComponent } from '../../common/input/password/password.component'
 import { SelectComponent } from '../../common/input/select/select.component'
 import { SwitchComponent } from '../../common/input/switch/switch.component'
 import { TextComponent } from '../../common/input/text/text.component'
@@ -47,7 +46,6 @@ import { LoadingComponentWithPermissions } from '../../loading-component/loading
    TextComponent,
    NumberComponent,
    FileComponent,
-   PasswordComponent,
    AsyncPipe,
    NgbNavModule,
    FormsModule,
@@ -1,11 +1,5 @@
-<div class="mb-3" [class.pb-3]="error">
+<div class="mb-3">
-  <div class="row">
+  <label class="form-label" [for]="inputId">{{title}}</label>
-    <div class="d-flex align-items-center position-relative hidden-button-container" [class.col-md-3]="horizontal">
-      @if (title) {
-        <label class="form-label" [class.mb-md-0]="horizontal" [for]="inputId">{{title}}</label>
-      }
-    </div>
-    <div class="position-relative" [class.col-md-9]="horizontal">
   <div class="input-group" [class.is-invalid]="error">
     <input #inputField [type]="showReveal && textVisible ? 'text' : 'password'" class="form-control" [class.is-invalid]="error" [id]="inputId" [(ngModel)]="value" (focus)="onFocus()" (focusout)="onFocusOut()" (change)="onChange(value)" [disabled]="disabled" [autocomplete]="autocomplete">
     @if (showReveal) {
@@ -20,5 +14,4 @@
   @if (hint) {
     <small class="form-text text-muted" [innerHTML]="hint | safeHtml"></small>
   }
-  </div>
 </div>
@@ -1,5 +1,5 @@
 <div class="btn-group">
-  <button type="button" class="btn btn-sm btn-outline-primary" (click)="clickSuggest()" [disabled]="loading || (suggestions && !aiEnabled)">
+  <button type="button" class="btn btn-sm btn-outline-primary" (click)="clickSuggest()" [disabled]="loading">
     @if (loading) {
       <div class="spinner-border spinner-border-sm" role="status"></div>
     } @else {
@@ -10,20 +10,14 @@
       <span class="badge bg-primary ms-2">{{ totalSuggestions }}</span>
     }
   </button>
 
-  @if (aiEnabled) {
   <div class="btn-group" ngbDropdown #dropdown="ngbDropdown" [popperOptions]="popperOptions">
 
     <button type="button" class="btn btn-sm btn-outline-primary" ngbDropdownToggle [disabled]="loading || !suggestions" aria-expanded="false" aria-controls="suggestionsDropdown" aria-label="Suggestions dropdown">
       <span class="visually-hidden" i18n>Show suggestions</span>
     </button>
 
     <div ngbDropdownMenu aria-labelledby="suggestionsDropdown" class="shadow suggestions-dropdown">
-      <div class="list-group list-group-flush small pb-0">
+      <div class="list-group list-group-flush small">
-        @if (!suggestions?.suggested_tags && !suggestions?.suggested_document_types && !suggestions?.suggested_correspondents) {
-          <div class="list-group-item text-muted fst-italic">
-            <small class="text-muted small fst-italic" i18n>No novel suggestions</small>
-          </div>
-        }
         @if (suggestions?.suggested_tags.length > 0) {
           <small class="list-group-item text-uppercase text-muted small">Tags</small>
           @for (tag of suggestions.suggested_tags; track tag) {
@@ -45,5 +39,4 @@
       </div>
     </div>
   </div>
-  }
 </div>
@@ -38,8 +38,6 @@ describe('SuggestionsDropdownComponent', () => {
   })
 
   it('should toggle dropdown when clickSuggest is called and suggestions are not null', () => {
-    component.aiEnabled = true
-    fixture.detectChanges()
     component.suggestions = {
       suggested_correspondents: [],
      suggested_tags: [],
@@ -24,9 +24,6 @@ export class SuggestionsDropdownComponent {
   @Input()
   suggestions: DocumentSuggestions = null
 
-  @Input()
-  aiEnabled: boolean = false
-
   @Input()
   loading: boolean = false
 
@@ -50,7 +47,7 @@ export class SuggestionsDropdownComponent {
     if (!this.suggestions) {
       this.getSuggestions.emit(this)
     } else {
-      this.dropdown?.toggle()
+      this.dropdown.toggle()
     }
   }
 
@@ -115,7 +115,6 @@
   [disabled]="!userCanEdit || suggestionsLoading"
   [loading]="suggestionsLoading"
   [suggestions]="suggestions"
-  [aiEnabled]="aiEnabled"
   (getSuggestions)="getSuggestions()"
   (addTag)="createTag($event)"
   (addDocumentType)="createDocumentType($event)"
@@ -299,10 +299,6 @@ export class DocumentDetailComponent
     return this.settings.get(SETTINGS_KEYS.USE_NATIVE_PDF_VIEWER)
   }
 
-  get aiEnabled(): boolean {
-    return this.settings.get(SETTINGS_KEYS.AI_ENABLED)
-  }
-
   get archiveContentRenderType(): ContentRenderType {
     return this.document?.archived_file_name
       ? this.getRenderType('application/pdf')
@@ -44,18 +44,11 @@ export enum ConfigOptionType {
   Boolean = 'boolean',
   JSON = 'json',
   File = 'file',
-  Password = 'password',
 }
 
 export const ConfigCategory = {
   General: $localize`General Settings`,
   OCR: $localize`OCR Settings`,
-  AI: $localize`AI Settings`,
-}
-
-export const LLMBackendConfig = {
-  OPENAI: 'openai',
-  OLLAMA: 'ollama',
 }
 
 export interface ConfigOption {
@@ -187,42 +180,6 @@ export const PaperlessConfigOptions: ConfigOption[] = [
     config_key: 'PAPERLESS_APP_TITLE',
     category: ConfigCategory.General,
   },
-  {
-    key: 'ai_enabled',
-    title: $localize`AI Enabled`,
-    type: ConfigOptionType.Boolean,
-    config_key: 'PAPERLESS_AI_ENABLED',
-    category: ConfigCategory.AI,
-  },
-  {
-    key: 'llm_backend',
-    title: $localize`LLM Backend`,
-    type: ConfigOptionType.Select,
-    choices: mapToItems(LLMBackendConfig),
-    config_key: 'PAPERLESS_LLM_BACKEND',
-    category: ConfigCategory.AI,
-  },
-  {
-    key: 'llm_model',
-    title: $localize`LLM Model`,
-    type: ConfigOptionType.String,
-    config_key: 'PAPERLESS_LLM_MODEL',
-    category: ConfigCategory.AI,
-  },
-  {
-    key: 'llm_api_key',
-    title: $localize`LLM API Key`,
-    type: ConfigOptionType.Password,
-    config_key: 'PAPERLESS_LLM_API_KEY',
-    category: ConfigCategory.AI,
-  },
-  {
-    key: 'llm_url',
-    title: $localize`LLM URL`,
-    type: ConfigOptionType.String,
-    config_key: 'PAPERLESS_LLM_URL',
-    category: ConfigCategory.AI,
-  },
 ]
 
 export interface PaperlessConfig extends ObjectWithId {
@@ -241,9 +198,4 @@ export interface PaperlessConfig extends ObjectWithId {
   user_args: object
   app_logo: string
   app_title: string
-  ai_enabled: boolean
-  llm_backend: string
-  llm_model: string
-  llm_api_key: string
-  llm_url: string
 }
@@ -73,7 +73,6 @@ export const SETTINGS_KEYS = {
  GMAIL_OAUTH_URL: 'gmail_oauth_url',
  OUTLOOK_OAUTH_URL: 'outlook_oauth_url',
  EMAIL_ENABLED: 'email_enabled',
- AI_ENABLED: 'ai_enabled',
 }
 
 export const SETTINGS: UiSetting[] = [
@@ -277,9 +276,4 @@ export const SETTINGS: UiSetting[] = [
    type: 'string',
    default: 'page-width', // ZoomSetting from 'document-detail.component'
  },
-  {
-    key: SETTINGS_KEYS.AI_ENABLED,
-    type: 'boolean',
-    default: false,
-  },
 ]
@@ -31,9 +31,10 @@ class TestApiAppConfig(DirectoriesMixin, APITestCase):
         response = self.client.get(self.ENDPOINT, format="json")
 
         self.assertEqual(response.status_code, status.HTTP_200_OK)
-        self.maxDiff = None
-        self.assertDictEqual(
-            response.data[0],
+        self.assertEqual(
+            json.dumps(response.data[0]),
+            json.dumps(
             {
                 "id": 1,
                 "user_args": None,
@@ -51,12 +52,8 @@ class TestApiAppConfig(DirectoriesMixin, APITestCase):
                 "color_conversion_strategy": None,
                 "app_title": None,
                 "app_logo": None,
-                "ai_enabled": False,
-                "llm_backend": None,
-                "llm_model": None,
-                "llm_api_key": None,
-                "llm_url": None,
             },
+            ),
         )
 
     def test_api_get_ui_settings_with_config(self):
|
@ -47,7 +47,6 @@ class TestApiUiSettings(DirectoriesMixin, APITestCase):
|
|||||||
"backend_setting": "default",
|
"backend_setting": "default",
|
||||||
},
|
},
|
||||||
"email_enabled": False,
|
"email_enabled": False,
|
||||||
"ai_enabled": False,
|
|
||||||
},
|
},
|
||||||
)
|
)
|
||||||
|
|
||||||
|
@@ -177,7 +177,6 @@ from paperless.ai.matching import match_document_types_by_name
 from paperless.ai.matching import match_storage_paths_by_name
 from paperless.ai.matching import match_tags_by_name
 from paperless.celery import app as celery_app
-from paperless.config import AIConfig
 from paperless.config import GeneralConfig
 from paperless.db import GnuPG
 from paperless.serialisers import GroupSerializer
@@ -739,12 +738,10 @@ class DocumentViewSet(
        ):
            return HttpResponseForbidden("Insufficient permissions")
 
-        ai_config = AIConfig()
-
-        if ai_config.ai_enabled:
+        if settings.AI_ENABLED:
            cached_llm_suggestions = get_llm_suggestion_cache(
                doc.pk,
-                backend=ai_config.llm_backend,
+                backend=settings.LLM_BACKEND,
            )
 
            if cached_llm_suggestions:
@@ -795,7 +792,7 @@ class DocumentViewSet(
                "dates": llm_suggestions.get("dates", []),
            }
 
-            set_llm_suggestions_cache(doc.pk, resp_data, backend=ai_config.llm_backend)
+            set_llm_suggestions_cache(doc.pk, resp_data, backend=settings.LLM_BACKEND)
        else:
            document_suggestions = get_suggestion_cache(doc.pk)
 
@@ -2224,10 +2221,6 @@ class UiSettingsView(GenericAPIView):
 
        ui_settings["email_enabled"] = settings.EMAIL_ENABLED
 
-        ai_config = AIConfig()
-
-        ui_settings["ai_enabled"] = ai_config.ai_enabled
-
        user_resp = {
            "id": user.id,
            "username": user.username,
@@ -2,7 +2,7 @@ import json
 import logging
 
 from documents.models import Document
-from paperless.ai.client import AIClient
+from paperless.ai.client import run_llm_query
 
 logger = logging.getLogger("paperless.ai.ai_classifier")
 
@@ -49,8 +49,7 @@ def get_ai_document_classification(document: Document) -> dict:
    """
 
    try:
-        client = AIClient()
-        result = client.run_llm_query(prompt)
+        result = run_llm_query(prompt)
        suggestions = parse_ai_classification_response(result)
        return suggestions or {}
    except Exception:
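The hunk above swaps a class-based client for a module-level `run_llm_query` call and keeps the `parse_ai_classification_response(result)` step. As a hedged sketch of what such a parse step can look like, with the function body purely illustrative rather than the paperless-ngx implementation:

```python
import json

# Illustrative parse step: pull a JSON object out of raw LLM output and keep
# only suggestion fields seen elsewhere in this diff. Not the actual
# parse_ai_classification_response implementation.

def parse_classification_sketch(raw: str) -> dict:
    try:
        data = json.loads(raw)
    except json.JSONDecodeError:
        return {}  # mirrors the "return suggestions or {}" fallback above
    return {
        "title": data.get("title"),
        "tags": data.get("tags", []),
        "dates": data.get("dates", []),
    }

result = parse_classification_sketch('{"title": "Invoice", "tags": ["tax"]}')
print(result["title"])  # Invoice
```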
@@ -1,45 +1,34 @@
 import logging
 
 import httpx
+from django.conf import settings
-from paperless.config import AIConfig
 
 logger = logging.getLogger("paperless.ai.client")
 
 
-class AIClient:
+def run_llm_query(prompt: str) -> str:
-    """
-    A client for interacting with an LLM backend.
-    """
-
-    def __init__(self):
-        self.settings = AIConfig()
-
-    def run_llm_query(self, prompt: str) -> str:
    logger.debug(
        "Running LLM query against %s with model %s",
-        self.settings.llm_backend,
+        settings.LLM_BACKEND,
-        self.settings.llm_model,
+        settings.LLM_MODEL,
    )
-    match self.settings.llm_backend:
+    match settings.LLM_BACKEND:
        case "openai":
-            result = self._run_openai_query(prompt)
+            result = _run_openai_query(prompt)
        case "ollama":
-            result = self._run_ollama_query(prompt)
+            result = _run_ollama_query(prompt)
        case _:
-            raise ValueError(
+            raise ValueError(f"Unsupported LLM backend: {settings.LLM_BACKEND}")
-                f"Unsupported LLM backend: {self.settings.llm_backend}",
-            )
    logger.debug("LLM query result: %s", result)
    return result
 
-    def _run_ollama_query(self, prompt: str) -> str:
-        url = self.settings.llm_url or "http://localhost:11434"
+def _run_ollama_query(prompt: str) -> str:
    with httpx.Client(timeout=30.0) as client:
        response = client.post(
-            f"{url}/api/chat",
+            f"{settings.OLLAMA_URL}/api/chat",
            json={
-                "model": self.settings.llm_model,
+                "model": settings.LLM_MODEL,
                "messages": [{"role": "user", "content": prompt}],
                "stream": False,
            },
@@ -47,21 +36,20 @@ class AIClient:
        response.raise_for_status()
        return response.json()["message"]["content"]
 
-    def _run_openai_query(self, prompt: str) -> str:
-        if not self.settings.llm_api_key:
-            raise RuntimeError("PAPERLESS_LLM_API_KEY is not set")
-
-        url = self.settings.llm_url or "https://api.openai.com"
+def _run_openai_query(prompt: str) -> str:
+    if not settings.LLM_API_KEY:
+        raise RuntimeError("PAPERLESS_LLM_API_KEY is not set")
 
    with httpx.Client(timeout=30.0) as client:
        response = client.post(
-            f"{url}/v1/chat/completions",
+            f"{settings.OPENAI_URL}/v1/chat/completions",
            headers={
-                "Authorization": f"Bearer {self.settings.llm_api_key}",
+                "Authorization": f"Bearer {settings.LLM_API_KEY}",
                "Content-Type": "application/json",
            },
            json={
-                "model": self.settings.llm_model,
+                "model": settings.LLM_MODEL,
                "messages": [{"role": "user", "content": prompt}],
                "temperature": 0.3,
            },
@@ -114,25 +114,3 @@ class GeneralConfig(BaseConfig):
 
        self.app_title = app_config.app_title or None
        self.app_logo = app_config.app_logo.url if app_config.app_logo else None
-
-
-@dataclasses.dataclass
-class AIConfig(BaseConfig):
-    """
-    AI related settings that require global scope
-    """
-
-    ai_enabled: bool = dataclasses.field(init=False)
-    llm_backend: str = dataclasses.field(init=False)
-    llm_model: str = dataclasses.field(init=False)
-    llm_api_key: str = dataclasses.field(init=False)
-    llm_url: str = dataclasses.field(init=False)
-
-    def __post_init__(self) -> None:
-        app_config = self._get_config_instance()
-
-        self.ai_enabled = app_config.ai_enabled or settings.AI_ENABLED
-        self.llm_backend = app_config.llm_backend or settings.LLM_BACKEND
-        self.llm_model = app_config.llm_model or settings.LLM_MODEL
-        self.llm_api_key = app_config.llm_api_key or settings.LLM_API_KEY
-        self.llm_url = app_config.llm_url or settings.LLM_URL
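The removed `AIConfig` resolves each field as "database value, else Django setting" via `or`. A minimal, self-contained sketch of that fallback pattern, with `SimpleNamespace` standing in for the app-config row and the settings module (both stand-ins are assumptions for illustration):

```python
import dataclasses
from types import SimpleNamespace

# Stand-ins for the stored ApplicationConfiguration row and django.conf.settings.
app_config = SimpleNamespace(llm_backend=None, llm_model="gpt-4")
settings = SimpleNamespace(LLM_BACKEND="ollama", LLM_MODEL="llama3")

@dataclasses.dataclass
class AIConfigSketch:
    llm_backend: str = dataclasses.field(init=False)
    llm_model: str = dataclasses.field(init=False)

    def __post_init__(self) -> None:
        # Falsy database values (None, "") fall through to the global setting.
        self.llm_backend = app_config.llm_backend or settings.LLM_BACKEND
        self.llm_model = app_config.llm_model or settings.LLM_MODEL

cfg = AIConfigSketch()
print(cfg.llm_backend, cfg.llm_model)  # ollama gpt-4
```

One consequence of using `or`: an explicitly stored falsy value can never override a truthy setting, which matters for boolean flags like `ai_enabled`.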
@@ -1,63 +0,0 @@
-# Generated by Django 5.1.7 on 2025-04-24 02:09
-
-from django.db import migrations
-from django.db import models
-
-
-class Migration(migrations.Migration):
-    dependencies = [
-        ("paperless", "0003_alter_applicationconfiguration_max_image_pixels"),
-    ]
-
-    operations = [
-        migrations.AddField(
-            model_name="applicationconfiguration",
-            name="ai_enabled",
-            field=models.BooleanField(
-                default=False,
-                null=True,
-                verbose_name="Enables AI features",
-            ),
-        ),
-        migrations.AddField(
-            model_name="applicationconfiguration",
-            name="llm_api_key",
-            field=models.CharField(
-                blank=True,
-                max_length=128,
-                null=True,
-                verbose_name="Sets the LLM API key",
-            ),
-        ),
-        migrations.AddField(
-            model_name="applicationconfiguration",
-            name="llm_backend",
-            field=models.CharField(
-                blank=True,
-                choices=[("openai", "OpenAI"), ("ollama", "Ollama")],
-                max_length=32,
-                null=True,
-                verbose_name="Sets the LLM backend",
-            ),
-        ),
-        migrations.AddField(
-            model_name="applicationconfiguration",
-            name="llm_model",
-            field=models.CharField(
-                blank=True,
-                max_length=32,
-                null=True,
-                verbose_name="Sets the LLM model",
-            ),
-        ),
-        migrations.AddField(
-            model_name="applicationconfiguration",
-            name="llm_url",
-            field=models.CharField(
-                blank=True,
-                max_length=128,
-                null=True,
-                verbose_name="Sets the LLM URL, optional",
-            ),
-        ),
-    ]
@@ -74,15 +74,6 @@ class ColorConvertChoices(models.TextChoices):
     CMYK = ("CMYK", _("CMYK"))
 
 
-class LLMBackend(models.TextChoices):
-    """
-    Matches to --llm-backend
-    """
-
-    OPENAI = ("openai", _("OpenAI"))
-    OLLAMA = ("ollama", _("Ollama"))
-
-
 class ApplicationConfiguration(AbstractSingletonModel):
     """
     Settings which are common across more than 1 parser
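The literal `choices` list inlined into the migration earlier in this diff is exactly what `LLMBackend.choices` expands to. A Django-free sketch of the same value/label mapping (a plain `Enum` stand-in is used here, since `models.TextChoices` needs a configured Django project; the `label` property below is hand-written, an assumption for this sketch):

```python
from enum import Enum


class LLMBackend(str, Enum):
    """Stand-in for the Django TextChoices class removed in this hunk."""

    OPENAI = "openai"
    OLLAMA = "ollama"

    @property
    def label(self) -> str:
        # Hard-coded labels; Django derives these from the second tuple element.
        return {"openai": "OpenAI", "ollama": "Ollama"}[self.value]


# Equivalent of LLMBackend.choices as serialized into the migration.
choices = [(member.value, member.label) for member in LLMBackend]
```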
@@ -193,45 +184,6 @@ class ApplicationConfiguration(AbstractSingletonModel):
         upload_to="logo/",
     )
 
-    """
-    AI related settings
-    """
-
-    ai_enabled = models.BooleanField(
-        verbose_name=_("Enables AI features"),
-        null=True,
-        default=False,
-    )
-
-    llm_backend = models.CharField(
-        verbose_name=_("Sets the LLM backend"),
-        null=True,
-        blank=True,
-        max_length=32,
-        choices=LLMBackend.choices,
-    )
-
-    llm_model = models.CharField(
-        verbose_name=_("Sets the LLM model"),
-        null=True,
-        blank=True,
-        max_length=32,
-    )
-
-    llm_api_key = models.CharField(
-        verbose_name=_("Sets the LLM API key"),
-        null=True,
-        blank=True,
-        max_length=128,
-    )
-
-    llm_url = models.CharField(
-        verbose_name=_("Sets the LLM URL, optional"),
-        null=True,
-        blank=True,
-        max_length=128,
-    )
-
     class Meta:
         verbose_name = _("paperless application settings")
 
@@ -185,10 +185,6 @@ class ProfileSerializer(serializers.ModelSerializer):
 
 class ApplicationConfigurationSerializer(serializers.ModelSerializer):
     user_args = serializers.JSONField(binary=True, allow_null=True)
-    llm_api_key = ObfuscatedPasswordField(
-        required=False,
-        allow_null=True,
-    )
 
     def run_validation(self, data):
         # Empty strings treated as None to avoid unexpected behavior
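The comment kept at the end of this hunk ("Empty strings treated as None to avoid unexpected behavior") matters for these optional settings: fields declared `null=True, blank=True` should not end up persisting `""` from a submitted form. A minimal sketch of that normalization idea, using a bare dict instead of the real DRF `run_validation` machinery (the helper name and field names are illustrative only):

```python
def normalize_blank_strings(data: dict) -> dict:
    """Map empty-string values to None so optional settings stay truly unset.

    Hypothetical helper illustrating the comment in run_validation; the real
    serializer performs this inside DRF's validation flow, not on a bare dict.
    """
    return {key: (None if value == "" else value) for key, value in data.items()}


# Example: an empty API key submitted from the settings form becomes None, not "".
cleaned = normalize_blank_strings({"llm_api_key": "", "llm_backend": "openai"})
```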
@@ -1275,4 +1275,5 @@ AI_ENABLED = __get_boolean("PAPERLESS_AI_ENABLED", "NO")
 LLM_BACKEND = os.getenv("PAPERLESS_LLM_BACKEND", "openai")  # or "ollama"
 LLM_MODEL = os.getenv("PAPERLESS_LLM_MODEL")
 LLM_API_KEY = os.getenv("PAPERLESS_LLM_API_KEY")
-LLM_URL = os.getenv("PAPERLESS_LLM_URL")
+OPENAI_URL = os.getenv("PAPERLESS_OPENAI_URL", "https://api.openai.com")
+OLLAMA_URL = os.getenv("PAPERLESS_OLLAMA_URL", "http://localhost:11434")
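`AI_ENABLED` in the hunk's header context goes through the settings module's private boolean helper rather than a raw `os.getenv`. A sketch of that kind of env-flag parsing (the exact set of accepted truthy spellings is an assumption here; the real `__get_boolean` is private to Paperless's `settings.py`):

```python
import os


def get_boolean(key: str, default: str = "NO") -> bool:
    # Treat common truthy spellings as True; everything else is False.
    # Sketch of Paperless's private __get_boolean helper, not the actual code.
    return os.getenv(key, default).lower() in {"yes", "y", "1", "t", "true"}


os.environ["PAPERLESS_AI_ENABLED"] = "yes"
ai_enabled = get_boolean("PAPERLESS_AI_ENABLED", "NO")
```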
@@ -13,8 +13,7 @@ def mock_document():
     return Document(filename="test.pdf", content="This is a test document content.")
 
 
-@pytest.mark.django_db
-@patch("paperless.ai.client.AIClient.run_llm_query")
+@patch("paperless.ai.ai_classifier.run_llm_query")
 def test_get_ai_document_classification_success(mock_run_llm_query, mock_document):
     mock_response = json.dumps(
         {
@@ -38,8 +37,7 @@ def test_get_ai_document_classification_success(mock_run_llm_query, mock_documen
     assert result["dates"] == ["2023-01-01"]
 
 
-@pytest.mark.django_db
-@patch("paperless.ai.client.AIClient.run_llm_query")
+@patch("paperless.ai.ai_classifier.run_llm_query")
 def test_get_ai_document_classification_failure(mock_run_llm_query, mock_document):
     mock_run_llm_query.side_effect = Exception("LLM query failed")
 
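Both classifier tests route the mocked `run_llm_query` output through JSON parsing: the success case round-trips a dict (the assertion checks `result["dates"] == ["2023-01-01"]`), and the failure case expects the exception path to be handled. A hedged sketch of that parsing step (the function name and the empty-dict fallback are assumptions, not the actual `ai_classifier` code):

```python
import json


def parse_llm_classification(raw: str) -> dict:
    # The LLM is expected to answer with a JSON object of suggestions
    # (tags, correspondent, dates, ...). Anything unparseable falls back
    # to an empty result rather than crashing the caller.
    try:
        parsed = json.loads(raw)
    except (TypeError, ValueError):
        return {}
    return parsed if isinstance(parsed, dict) else {}


result = parse_llm_classification(json.dumps({"dates": ["2023-01-01"]}))
```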
@@ -4,7 +4,9 @@ from unittest.mock import patch
 import pytest
 from django.conf import settings
 
-from paperless.ai.client import AIClient
+from paperless.ai.client import _run_ollama_query
+from paperless.ai.client import _run_openai_query
+from paperless.ai.client import run_llm_query
 
 
 @pytest.fixture
@@ -12,59 +14,52 @@ def mock_settings():
     settings.LLM_BACKEND = "openai"
     settings.LLM_MODEL = "gpt-3.5-turbo"
     settings.LLM_API_KEY = "test-api-key"
+    settings.OPENAI_URL = "https://api.openai.com"
+    settings.OLLAMA_URL = "https://ollama.example.com"
     yield settings
 
 
-@pytest.mark.django_db
-@patch("paperless.ai.client.AIClient._run_openai_query")
-@patch("paperless.ai.client.AIClient._run_ollama_query")
+@patch("paperless.ai.client._run_openai_query")
+@patch("paperless.ai.client._run_ollama_query")
 def test_run_llm_query_openai(mock_ollama_query, mock_openai_query, mock_settings):
-    mock_settings.LLM_BACKEND = "openai"
     mock_openai_query.return_value = "OpenAI response"
-    client = AIClient()
-    result = client.run_llm_query("Test prompt")
+    result = run_llm_query("Test prompt")
     assert result == "OpenAI response"
     mock_openai_query.assert_called_once_with("Test prompt")
     mock_ollama_query.assert_not_called()
 
 
-@pytest.mark.django_db
-@patch("paperless.ai.client.AIClient._run_openai_query")
-@patch("paperless.ai.client.AIClient._run_ollama_query")
+@patch("paperless.ai.client._run_openai_query")
+@patch("paperless.ai.client._run_ollama_query")
 def test_run_llm_query_ollama(mock_ollama_query, mock_openai_query, mock_settings):
     mock_settings.LLM_BACKEND = "ollama"
     mock_ollama_query.return_value = "Ollama response"
-    client = AIClient()
-    result = client.run_llm_query("Test prompt")
+    result = run_llm_query("Test prompt")
     assert result == "Ollama response"
     mock_ollama_query.assert_called_once_with("Test prompt")
     mock_openai_query.assert_not_called()
 
 
-@pytest.mark.django_db
 def test_run_llm_query_unsupported_backend(mock_settings):
     mock_settings.LLM_BACKEND = "unsupported"
-    client = AIClient()
     with pytest.raises(ValueError, match="Unsupported LLM backend: unsupported"):
-        client.run_llm_query("Test prompt")
+        run_llm_query("Test prompt")
 
 
-@pytest.mark.django_db
 def test_run_openai_query(httpx_mock, mock_settings):
-    mock_settings.LLM_BACKEND = "openai"
     httpx_mock.add_response(
-        url="https://api.openai.com/v1/chat/completions",
+        url=f"{mock_settings.OPENAI_URL}/v1/chat/completions",
         json={
             "choices": [{"message": {"content": "OpenAI response"}}],
         },
     )
 
-    client = AIClient()
-    result = client.run_llm_query("Test prompt")
+    result = _run_openai_query("Test prompt")
     assert result == "OpenAI response"
 
     request = httpx_mock.get_request()
     assert request.method == "POST"
+    assert request.url == f"{mock_settings.OPENAI_URL}/v1/chat/completions"
     assert request.headers["Authorization"] == f"Bearer {mock_settings.LLM_API_KEY}"
     assert request.headers["Content-Type"] == "application/json"
     assert json.loads(request.content) == {
@@ -74,20 +69,18 @@ def test_run_openai_query(httpx_mock, mock_settings):
     }
 
 
-@pytest.mark.django_db
 def test_run_ollama_query(httpx_mock, mock_settings):
-    mock_settings.LLM_BACKEND = "ollama"
     httpx_mock.add_response(
-        url="http://localhost:11434/api/chat",
+        url=f"{mock_settings.OLLAMA_URL}/api/chat",
         json={"message": {"content": "Ollama response"}},
     )
 
-    client = AIClient()
-    result = client.run_llm_query("Test prompt")
+    result = _run_ollama_query("Test prompt")
     assert result == "Ollama response"
 
     request = httpx_mock.get_request()
     assert request.method == "POST"
+    assert request.url == f"{mock_settings.OLLAMA_URL}/api/chat"
     assert json.loads(request.content) == {
         "model": mock_settings.LLM_MODEL,
         "messages": [{"role": "user", "content": "Test prompt"}],