LLM Engine#

The LLM Engine Processor generates text using a language model. It is an essential component of the OpenPlugin system for tasks that require natural language generation.

Supported Input Port:

text: The LLM Engine Processor accepts input through the “text” port. The input should be a string that serves as a prompt for the language model.

Supported Output Port:

text: The processor produces output through the “text” port. The output is a string generated by the language model based on the provided prompt.
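The port contract above can be sketched as a small class. This is a hedged illustration of the input/output relationship only; the class and method names are illustrative and are not OpenPlugin's actual API.

```python
# Hypothetical sketch: a processor that reads a string prompt from its
# "text" input port and emits a generated string on its "text" output port.
class LLMEngineProcessor:
    input_port = "text"
    output_port = "text"

    def __init__(self, generate):
        # `generate` stands in for a call to a language model.
        self.generate = generate

    def process(self, text: str) -> str:
        # The prompt string goes in; the generated string comes out.
        return self.generate(text)

# Usage with a stand-in model in place of a real LLM call:
echo = LLMEngineProcessor(lambda prompt: f"model output for: {prompt}")
result = echo.process("Hello")
```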

List of Implementations:#

OpenAI Implementation#

The OpenAI implementation of the LLM Engine Processor uses OpenAI's API to generate text from the input prompt with the configured model.

Metadata

| Field | Type | Required | User Provided | Description |
|---|---|---|---|---|
| model_name | string | no | no | The name of the language model to be used. The default value is "gpt-3.5-turbo". |
| pre_prompt | string | no | no | An optional string that is prepended to the input text before it is fed to the language model. The default value is null. |
| openai_api_key | string | yes | yes | The API key for accessing OpenAI's language models. This key is user-provided. |
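The pre_prompt behavior described above can be shown in a few lines. This is a hedged sketch of the documented behavior (prepending the string to the input text); the function name is illustrative, not OpenPlugin's actual API.

```python
# Hypothetical helper mirroring the documented pre_prompt behavior.
def build_prompt(text, pre_prompt=None):
    # pre_prompt defaults to null/None, in which case the input
    # text reaches the language model unchanged.
    if pre_prompt is None:
        return text
    # When set, pre_prompt is prepended to the input text.
    return pre_prompt + text
```

For example, `build_prompt("Translate: hola", pre_prompt="You are a translator. ")` yields the concatenated prompt that would be fed to the model.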

Sample processor configuration:#

NOTE: A processor is always added to a module (Input or Output). The module is then added to the pipeline.

{
    "processor_type": "llm_engine",
    "processor_implementation_type": "llm_engine_with_openai",
    "input_port": "text",
    "output_port": "text",
    "metadata": {}
}
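Since openai_api_key is required and user-provided, a usable configuration must populate the metadata object. The sketch below assembles the configuration as a Python dict; the placeholder key value and the validation helper are illustrative assumptions, not part of OpenPlugin.

```python
# Hedged sketch: the sample configuration with the required metadata
# filled in. The API key value here is a placeholder.
config = {
    "processor_type": "llm_engine",
    "processor_implementation_type": "llm_engine_with_openai",
    "input_port": "text",
    "output_port": "text",
    "metadata": {
        "model_name": "gpt-3.5-turbo",   # optional; this is the default
        "pre_prompt": None,              # optional; default is null
        "openai_api_key": "YOUR-KEY",    # required, user-provided
    },
}

def validate(cfg):
    # Per the metadata table, openai_api_key is the only required field.
    missing = [k for k in ("openai_api_key",) if not cfg["metadata"].get(k)]
    if missing:
        raise ValueError(f"missing required metadata: {missing}")

validate(config)  # raises if the required metadata is absent
```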